Test Report: Docker_Linux_crio 21409

2aa028e6c9ae4a79883616b371bbf57b9811dc19:2025-10-14:41906

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 521.01
38 TestErrorSpam/setup 498.66
47 TestFunctional/serial/StartWithProxy 503.57
49 TestFunctional/serial/SoftStart 366.74
51 TestFunctional/serial/KubectlGetPods 2.21
61 TestFunctional/serial/MinikubeKubectlCmd 2.2
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.3
63 TestFunctional/serial/ExtraConfig 737.02
64 TestFunctional/serial/ComponentHealth 1.96
67 TestFunctional/serial/InvalidService 0.05
70 TestFunctional/parallel/DashboardCmd 1.76
73 TestFunctional/parallel/StatusCmd 3.47
77 TestFunctional/parallel/ServiceCmdConnect 1.64
79 TestFunctional/parallel/PersistentVolumeClaim 241.57
83 TestFunctional/parallel/MySQL 1.44
89 TestFunctional/parallel/NodeLabels 1.39
94 TestFunctional/parallel/ServiceCmd/DeployApp 0.06
95 TestFunctional/parallel/ServiceCmd/List 0.33
96 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
97 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
98 TestFunctional/parallel/ServiceCmd/Format 0.36
100 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
101 TestFunctional/parallel/ServiceCmd/URL 0.35
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.08
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 116.16
109 TestFunctional/parallel/MountCmd/any-port 2.56
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
141 TestMultiControlPlane/serial/StartCluster 505.46
142 TestMultiControlPlane/serial/DeployApp 79.53
143 TestMultiControlPlane/serial/PingHostFromPods 1.41
144 TestMultiControlPlane/serial/AddWorkerNode 1.57
145 TestMultiControlPlane/serial/NodeLabels 1.38
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.64
147 TestMultiControlPlane/serial/CopyFile 1.63
148 TestMultiControlPlane/serial/StopSecondaryNode 1.71
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.63
150 TestMultiControlPlane/serial/RestartSecondaryNode 54.93
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.64
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.31
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.9
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.65
155 TestMultiControlPlane/serial/StopCluster 1.38
156 TestMultiControlPlane/serial/RestartCluster 368.46
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.65
158 TestMultiControlPlane/serial/AddSecondaryNode 1.6
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.65
163 TestJSONOutput/start/Command 497.63
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 503.5
221 TestMultiNode/serial/ValidateNameConflict 7200.071
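The slowest failures dominate this run's wall-clock time, so ranking the rows above by duration is a useful first triage step. A minimal shell sketch, assuming the table body has been saved verbatim to a file named failures.txt (three whitespace-separated fields: order, test name, seconds; the filename is illustrative):

	# ten slowest failed tests, sorted numerically on the duration field
	sort -k3,3 -rn failures.txt | head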
TestAddons/Setup (521.01s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m40.970079354s)
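To reproduce locally, the same start invocation can be replayed against a freshly built binary. A minimal sketch, assuming a minikube checkout with out/minikube-linux-amd64 already built and a working Docker daemon; the addon list is trimmed here for brevity (the full flag set is in the Run line above), and the profile name is reused from the report:

	# remove any leftover profile, then replay the failing start
	out/minikube-linux-amd64 delete -p addons-995790 || true
	out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 \
	  --alsologtostderr --driver=docker --container-runtime=crio \
	  --addons=registry --addons=ingress   # append the remaining --addons flags as above
	echo "exit status: $?"

The harness above saw exit status 80 after roughly 8m41s, so allow a similar timeout when re-running.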

-- stdout --
	* [addons-995790] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-995790" primary control-plane node in "addons-995790" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1014 19:14:54.143098  418677 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:14:54.143398  418677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:14:54.143409  418677 out.go:374] Setting ErrFile to fd 2...
	I1014 19:14:54.143413  418677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:14:54.143632  418677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:14:54.144229  418677 out.go:368] Setting JSON to false
	I1014 19:14:54.145235  418677 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7040,"bootTime":1760462254,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:14:54.145345  418677 start.go:141] virtualization: kvm guest
	I1014 19:14:54.147363  418677 out.go:179] * [addons-995790] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:14:54.149119  418677 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:14:54.149122  418677 notify.go:220] Checking for updates...
	I1014 19:14:54.150463  418677 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:14:54.152135  418677 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:14:54.153561  418677 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:14:54.154959  418677 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:14:54.156505  418677 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:14:54.158035  418677 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:14:54.183220  418677 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:14:54.183324  418677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:14:54.245784  418677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-14 19:14:54.234834129 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:14:54.245907  418677 docker.go:318] overlay module found
	I1014 19:14:54.247538  418677 out.go:179] * Using the docker driver based on user configuration
	I1014 19:14:54.248661  418677 start.go:305] selected driver: docker
	I1014 19:14:54.248676  418677 start.go:925] validating driver "docker" against <nil>
	I1014 19:14:54.248688  418677 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:14:54.249214  418677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:14:54.311539  418677 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-14 19:14:54.301353849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:14:54.311819  418677 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:14:54.312102  418677 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:14:54.314062  418677 out.go:179] * Using Docker driver with root privileges
	I1014 19:14:54.315525  418677 cni.go:84] Creating CNI manager for ""
	I1014 19:14:54.315606  418677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:14:54.315621  418677 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:14:54.315715  418677 start.go:349] cluster config:
	{Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:14:54.317185  418677 out.go:179] * Starting "addons-995790" primary control-plane node in "addons-995790" cluster
	I1014 19:14:54.318636  418677 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:14:54.320059  418677 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:14:54.321211  418677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:14:54.321257  418677 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:14:54.321267  418677 cache.go:58] Caching tarball of preloaded images
	I1014 19:14:54.321325  418677 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:14:54.321367  418677 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:14:54.321375  418677 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:14:54.321700  418677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/config.json ...
	I1014 19:14:54.321726  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/config.json: {Name:mk863fd1f62ebe29846bf9c83671c965452917a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:14:54.339156  418677 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:14:54.339307  418677 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1014 19:14:54.339328  418677 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1014 19:14:54.339333  418677 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1014 19:14:54.339340  418677 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1014 19:14:54.339348  418677 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1014 19:15:07.142673  418677 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1014 19:15:07.142721  418677 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:15:07.142784  418677 start.go:360] acquireMachinesLock for addons-995790: {Name:mk266b39183b20e3ac85090b638bd67120f36dfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:15:07.142932  418677 start.go:364] duration metric: took 115.304µs to acquireMachinesLock for "addons-995790"
	I1014 19:15:07.142971  418677 start.go:93] Provisioning new machine with config: &{Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:15:07.143044  418677 start.go:125] createHost starting for "" (driver="docker")
	I1014 19:15:07.145390  418677 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1014 19:15:07.145624  418677 start.go:159] libmachine.API.Create for "addons-995790" (driver="docker")
	I1014 19:15:07.145656  418677 client.go:168] LocalClient.Create starting
	I1014 19:15:07.145846  418677 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 19:15:07.434905  418677 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 19:15:07.715299  418677 cli_runner.go:164] Run: docker network inspect addons-995790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 19:15:07.733452  418677 cli_runner.go:211] docker network inspect addons-995790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 19:15:07.733522  418677 network_create.go:284] running [docker network inspect addons-995790] to gather additional debugging logs...
	I1014 19:15:07.733543  418677 cli_runner.go:164] Run: docker network inspect addons-995790
	W1014 19:15:07.750744  418677 cli_runner.go:211] docker network inspect addons-995790 returned with exit code 1
	I1014 19:15:07.750793  418677 network_create.go:287] error running [docker network inspect addons-995790]: docker network inspect addons-995790: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-995790 not found
	I1014 19:15:07.750815  418677 network_create.go:289] output of [docker network inspect addons-995790]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-995790 not found
	
	** /stderr **
	I1014 19:15:07.750926  418677 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:15:07.768616  418677 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00188ad60}
	I1014 19:15:07.768676  418677 network_create.go:124] attempt to create docker network addons-995790 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 19:15:07.768727  418677 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-995790 addons-995790
	I1014 19:15:07.958947  418677 network_create.go:108] docker network addons-995790 192.168.49.0/24 created
	I1014 19:15:07.959032  418677 kic.go:121] calculated static IP "192.168.49.2" for the "addons-995790" container
	I1014 19:15:07.959107  418677 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 19:15:07.977066  418677 cli_runner.go:164] Run: docker volume create addons-995790 --label name.minikube.sigs.k8s.io=addons-995790 --label created_by.minikube.sigs.k8s.io=true
	I1014 19:15:08.056989  418677 oci.go:103] Successfully created a docker volume addons-995790
	I1014 19:15:08.057092  418677 cli_runner.go:164] Run: docker run --rm --name addons-995790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-995790 --entrypoint /usr/bin/test -v addons-995790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 19:15:14.536478  418677 cli_runner.go:217] Completed: docker run --rm --name addons-995790-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-995790 --entrypoint /usr/bin/test -v addons-995790:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (6.479342619s)
	I1014 19:15:14.536549  418677 oci.go:107] Successfully prepared a docker volume addons-995790
	I1014 19:15:14.536567  418677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:15:14.536595  418677 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 19:15:14.536653  418677 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-995790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 19:15:18.947715  418677 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-995790:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.411003721s)
	I1014 19:15:18.947774  418677 kic.go:203] duration metric: took 4.411159233s to extract preloaded images to volume ...
	W1014 19:15:18.947868  418677 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 19:15:18.947924  418677 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 19:15:18.947967  418677 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 19:15:19.004530  418677 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-995790 --name addons-995790 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-995790 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-995790 --network addons-995790 --ip 192.168.49.2 --volume addons-995790:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 19:15:19.288040  418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Running}}
	I1014 19:15:19.307673  418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Status}}
	I1014 19:15:19.326235  418677 cli_runner.go:164] Run: docker exec addons-995790 stat /var/lib/dpkg/alternatives/iptables
	I1014 19:15:19.373676  418677 oci.go:144] the created container "addons-995790" has a running status.
	I1014 19:15:19.373711  418677 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa...
	I1014 19:15:19.438478  418677 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 19:15:19.467598  418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Status}}
	I1014 19:15:19.487585  418677 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 19:15:19.487617  418677 kic_runner.go:114] Args: [docker exec --privileged addons-995790 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 19:15:19.530777  418677 cli_runner.go:164] Run: docker container inspect addons-995790 --format={{.State.Status}}
	I1014 19:15:19.553491  418677 machine.go:93] provisionDockerMachine start ...
	I1014 19:15:19.553635  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:19.573226  418677 main.go:141] libmachine: Using SSH client type: native
	I1014 19:15:19.573505  418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1014 19:15:19.573520  418677 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:15:19.574283  418677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49188->127.0.0.1:32888: read: connection reset by peer
	I1014 19:15:22.724358  418677 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-995790
	
	I1014 19:15:22.724401  418677 ubuntu.go:182] provisioning hostname "addons-995790"
	I1014 19:15:22.724470  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:22.743022  418677 main.go:141] libmachine: Using SSH client type: native
	I1014 19:15:22.743269  418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1014 19:15:22.743284  418677 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-995790 && echo "addons-995790" | sudo tee /etc/hostname
	I1014 19:15:22.900512  418677 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-995790
	
	I1014 19:15:22.900585  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:22.920031  418677 main.go:141] libmachine: Using SSH client type: native
	I1014 19:15:22.920276  418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1014 19:15:22.920295  418677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-995790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-995790/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-995790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:15:23.068004  418677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:15:23.068050  418677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:15:23.068081  418677 ubuntu.go:190] setting up certificates
	I1014 19:15:23.068102  418677 provision.go:84] configureAuth start
	I1014 19:15:23.068156  418677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-995790
	I1014 19:15:23.086311  418677 provision.go:143] copyHostCerts
	I1014 19:15:23.086414  418677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:15:23.086563  418677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:15:23.086676  418677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:15:23.086801  418677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.addons-995790 san=[127.0.0.1 192.168.49.2 addons-995790 localhost minikube]
	I1014 19:15:23.273431  418677 provision.go:177] copyRemoteCerts
	I1014 19:15:23.273511  418677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:15:23.273574  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:23.291916  418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
	I1014 19:15:23.396479  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:15:23.416691  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 19:15:23.434262  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:15:23.453288  418677 provision.go:87] duration metric: took 385.170243ms to configureAuth
	I1014 19:15:23.453319  418677 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:15:23.453535  418677 config.go:182] Loaded profile config "addons-995790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:15:23.453680  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:23.471914  418677 main.go:141] libmachine: Using SSH client type: native
	I1014 19:15:23.472137  418677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1014 19:15:23.472152  418677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:15:23.733237  418677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:15:23.733270  418677 machine.go:96] duration metric: took 4.179754524s to provisionDockerMachine
	I1014 19:15:23.733282  418677 client.go:171] duration metric: took 16.587618271s to LocalClient.Create
	I1014 19:15:23.733305  418677 start.go:167] duration metric: took 16.587684582s to libmachine.API.Create "addons-995790"
	I1014 19:15:23.733316  418677 start.go:293] postStartSetup for "addons-995790" (driver="docker")
	I1014 19:15:23.733327  418677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:15:23.733380  418677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:15:23.733412  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:23.751965  418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
	I1014 19:15:23.859846  418677 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:15:23.863838  418677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:15:23.863870  418677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:15:23.863883  418677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:15:23.863992  418677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:15:23.864025  418677 start.go:296] duration metric: took 130.703561ms for postStartSetup
	I1014 19:15:23.864349  418677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-995790
	I1014 19:15:23.883188  418677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/config.json ...
	I1014 19:15:23.883467  418677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:15:23.883511  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:23.901674  418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
	I1014 19:15:24.004076  418677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:15:24.009323  418677 start.go:128] duration metric: took 16.866258262s to createHost
	I1014 19:15:24.009355  418677 start.go:83] releasing machines lock for "addons-995790", held for 16.866403979s
	I1014 19:15:24.009448  418677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-995790
	I1014 19:15:24.027603  418677 ssh_runner.go:195] Run: cat /version.json
	I1014 19:15:24.027655  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:24.027682  418677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:15:24.027749  418677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-995790
	I1014 19:15:24.047018  418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
	I1014 19:15:24.047980  418677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/addons-995790/id_rsa Username:docker}
	I1014 19:15:24.147333  418677 ssh_runner.go:195] Run: systemctl --version
	I1014 19:15:24.203010  418677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:15:24.239265  418677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:15:24.244247  418677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:15:24.244326  418677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:15:24.271213  418677 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 19:15:24.271241  418677 start.go:495] detecting cgroup driver to use...
	I1014 19:15:24.271283  418677 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:15:24.271338  418677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:15:24.288582  418677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:15:24.302136  418677 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:15:24.302202  418677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:15:24.319309  418677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:15:24.338258  418677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:15:24.421166  418677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:15:24.507344  418677 docker.go:234] disabling docker service ...
	I1014 19:15:24.507413  418677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:15:24.527160  418677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:15:24.540998  418677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:15:24.619915  418677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:15:24.702381  418677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:15:24.715637  418677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:15:24.730967  418677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:15:24.731041  418677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.741751  418677 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:15:24.741850  418677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.751327  418677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.760660  418677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.769496  418677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:15:24.778235  418677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.787210  418677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.800836  418677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:15:24.809821  418677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:15:24.818502  418677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:15:24.826249  418677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:15:24.908825  418677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:15:25.018435  418677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:15:25.018512  418677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:15:25.022771  418677 start.go:563] Will wait 60s for crictl version
	I1014 19:15:25.022829  418677 ssh_runner.go:195] Run: which crictl
	I1014 19:15:25.026593  418677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:15:25.051748  418677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:15:25.051887  418677 ssh_runner.go:195] Run: crio --version
	I1014 19:15:25.082124  418677 ssh_runner.go:195] Run: crio --version
	I1014 19:15:25.114819  418677 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:15:25.116051  418677 cli_runner.go:164] Run: docker network inspect addons-995790 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:15:25.133238  418677 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:15:25.137615  418677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:15:25.149084  418677 kubeadm.go:883] updating cluster {Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:15:25.149215  418677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:15:25.149264  418677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:15:25.183200  418677 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:15:25.183223  418677 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:15:25.183270  418677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:15:25.211224  418677 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:15:25.211248  418677 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:15:25.211257  418677 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 19:15:25.211378  418677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-995790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:15:25.211465  418677 ssh_runner.go:195] Run: crio config
	I1014 19:15:25.258842  418677 cni.go:84] Creating CNI manager for ""
	I1014 19:15:25.258862  418677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:15:25.258884  418677 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:15:25.258909  418677 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-995790 NodeName:addons-995790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:15:25.259030  418677 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-995790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:15:25.259096  418677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:15:25.268016  418677 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:15:25.268081  418677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:15:25.276455  418677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1014 19:15:25.289861  418677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:15:25.306253  418677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1014 19:15:25.319395  418677 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:15:25.323228  418677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:15:25.334293  418677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:15:25.410090  418677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:15:25.436485  418677 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790 for IP: 192.168.49.2
	I1014 19:15:25.436515  418677 certs.go:195] generating shared ca certs ...
	I1014 19:15:25.436536  418677 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:25.436737  418677 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:15:25.557889  418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt ...
	I1014 19:15:25.557928  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt: {Name:mk2101298a47cdfc6a7535a5a89a43f86399641b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:25.558191  418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key ...
	I1014 19:15:25.558212  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key: {Name:mk72a468f76fb8f554fa7e2da729b4a33b35df52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:25.558339  418677 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:15:25.780710  418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt ...
	I1014 19:15:25.780744  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt: {Name:mk8a4f460d1d6423585fbeb378daff541f57ef46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:25.780971  418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key ...
	I1014 19:15:25.780996  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key: {Name:mka86b0277830100ef51b2ba9ab1ab8b3c14e1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:25.781119  418677 certs.go:257] generating profile certs ...
	I1014 19:15:25.781181  418677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.key
	I1014 19:15:25.781197  418677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.crt with IP's: []
	I1014 19:15:26.021360  418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.crt ...
	I1014 19:15:26.021395  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.crt: {Name:mk568c3fb2b3ce7a619e65c16b9ccc7357b1de34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:26.022262  418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.key ...
	I1014 19:15:26.022285  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/client.key: {Name:mkcbe05c68b1abcbf73dde4475efe992aa01dcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:26.022399  418677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf
	I1014 19:15:26.022431  418677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 19:15:26.181095  418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf ...
	I1014 19:15:26.181132  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf: {Name:mk54a281e3240cd2ed152e6d5b8c0ca21fb3ed96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:26.181331  418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf ...
	I1014 19:15:26.181350  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf: {Name:mkd2e457521989ab0cbe1fce8d998e1b7682489f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:26.181476  418677 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt.2cb922cf -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt
	I1014 19:15:26.181568  418677 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key.2cb922cf -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key
	I1014 19:15:26.181618  418677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key
	I1014 19:15:26.181644  418677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt with IP's: []
	I1014 19:15:26.305564  418677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt ...
	I1014 19:15:26.305595  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt: {Name:mk0d3ce801fbf796b1b253618701baf984224cd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:26.305779  418677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key ...
	I1014 19:15:26.305799  418677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key: {Name:mk971f001fe582ee61df229dd9241d3ce1e12713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:15:26.306684  418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:15:26.306726  418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:15:26.306747  418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:15:26.306804  418677 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:15:26.308080  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:15:26.328120  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:15:26.346466  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:15:26.364877  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:15:26.382948  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 19:15:26.400784  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 19:15:26.418956  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:15:26.437111  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/addons-995790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 19:15:26.454998  418677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:15:26.476035  418677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:15:26.489293  418677 ssh_runner.go:195] Run: openssl version
	I1014 19:15:26.496090  418677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:15:26.508062  418677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:15:26.512159  418677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:15:26.512225  418677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:15:26.546451  418677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
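	The two openssl steps above follow the standard c_rehash convention: the certificate's subject hash becomes the symlink name under /etc/ssl/certs. A minimal manual check (hypothetical follow-up commands, using the paths from this run):
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, here b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # should be a symlink to minikubeCA.pem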
	I1014 19:15:26.555518  418677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:15:26.559800  418677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 19:15:26.559873  418677 kubeadm.go:400] StartCluster: {Name:addons-995790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-995790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:15:26.559972  418677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:15:26.560030  418677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:15:26.588800  418677 cri.go:89] found id: ""
	I1014 19:15:26.588892  418677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:15:26.597437  418677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:15:26.605988  418677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:15:26.606048  418677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:15:26.613996  418677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:15:26.614018  418677 kubeadm.go:157] found existing configuration files:
	
	I1014 19:15:26.614062  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 19:15:26.622005  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:15:26.622055  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:15:26.629686  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 19:15:26.637534  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:15:26.637595  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:15:26.645355  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 19:15:26.653337  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:15:26.653398  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:15:26.661244  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 19:15:26.669176  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:15:26.669240  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:15:26.677064  418677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:15:26.736796  418677 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:15:26.798564  418677 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:19:30.687033  418677 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 19:19:30.687284  418677 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:19:30.690377  418677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:19:30.690500  418677 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:19:30.690689  418677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:19:30.690818  418677 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:19:30.690897  418677 kubeadm.go:318] OS: Linux
	I1014 19:19:30.690990  418677 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:19:30.691065  418677 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:19:30.691137  418677 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:19:30.691214  418677 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:19:30.691289  418677 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:19:30.691377  418677 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:19:30.691469  418677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:19:30.691539  418677 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:19:30.691632  418677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:19:30.691778  418677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:19:30.691906  418677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:19:30.691986  418677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:19:30.694820  418677 out.go:252]   - Generating certificates and keys ...
	I1014 19:19:30.694984  418677 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:19:30.695092  418677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:19:30.695205  418677 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 19:19:30.695277  418677 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 19:19:30.695362  418677 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 19:19:30.695410  418677 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 19:19:30.695458  418677 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 19:19:30.695553  418677 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:19:30.695598  418677 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 19:19:30.695699  418677 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:19:30.695811  418677 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 19:19:30.695884  418677 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 19:19:30.695938  418677 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 19:19:30.695989  418677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:19:30.696030  418677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:19:30.696076  418677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:19:30.696124  418677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:19:30.696201  418677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:19:30.696257  418677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:19:30.696331  418677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:19:30.696394  418677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:19:30.697912  418677 out.go:252]   - Booting up control plane ...
	I1014 19:19:30.697993  418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:19:30.698059  418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:19:30.698120  418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:19:30.698220  418677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:19:30.698305  418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:19:30.698402  418677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:19:30.698480  418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:19:30.698517  418677 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:19:30.698618  418677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:19:30.698709  418677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:19:30.698781  418677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001912654s
	I1014 19:19:30.698881  418677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:19:30.698979  418677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 19:19:30.699082  418677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:19:30.699155  418677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:19:30.699229  418677 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000351135s
	I1014 19:19:30.699297  418677 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000565398s
	I1014 19:19:30.699362  418677 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000662204s
	I1014 19:19:30.699368  418677 kubeadm.go:318] 
	I1014 19:19:30.699455  418677 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:19:30.699535  418677 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:19:30.699614  418677 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:19:30.699700  418677 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:19:30.699776  418677 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:19:30.699855  418677 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:19:30.699900  418677 kubeadm.go:318] 
	W1014 19:19:30.700035  418677 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-995790 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001912654s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000351135s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000565398s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000662204s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
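	The advice kubeadm prints above is the right starting point on a crio node; the full triage loop is roughly the following (a sketch assembled from commands that appear elsewhere in this log, not an extra step of the run):
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute a real ID from ps -a
	  sudo journalctl -u kubelet -n 400   # kubelet-side view of why the static pods never came up
	  sudo journalctl -u crio -n 400      # runtime-side view: image pulls, sandbox/container create failures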
	
	I1014 19:19:30.700109  418677 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:19:31.147300  418677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:19:31.161333  418677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:19:31.161393  418677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:19:31.170157  418677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:19:31.170181  418677 kubeadm.go:157] found existing configuration files:
	
	I1014 19:19:31.170230  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 19:19:31.179182  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:19:31.179253  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:19:31.187857  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 19:19:31.195954  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:19:31.196015  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:19:31.203851  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 19:19:31.211661  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:19:31.211707  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:19:31.219224  418677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 19:19:31.226946  418677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:19:31.227003  418677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:19:31.234369  418677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:19:31.293676  418677 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:19:31.354438  418677 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:23:34.602125  418677 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 19:23:34.602353  418677 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:23:34.605314  418677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:23:34.605379  418677 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:23:34.605471  418677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:23:34.605518  418677 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:23:34.605554  418677 kubeadm.go:318] OS: Linux
	I1014 19:23:34.605600  418677 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:23:34.605681  418677 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:23:34.605772  418677 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:23:34.605839  418677 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:23:34.605917  418677 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:23:34.605985  418677 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:23:34.606054  418677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:23:34.606113  418677 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:23:34.606211  418677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:23:34.606370  418677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:23:34.606519  418677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:23:34.606591  418677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:23:34.610561  418677 out.go:252]   - Generating certificates and keys ...
	I1014 19:23:34.610641  418677 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:23:34.610706  418677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:23:34.610793  418677 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:23:34.610868  418677 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:23:34.610930  418677 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:23:34.610989  418677 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:23:34.611057  418677 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:23:34.611108  418677 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:23:34.611171  418677 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:23:34.611229  418677 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:23:34.611260  418677 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:23:34.611331  418677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:23:34.611417  418677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:23:34.611502  418677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:23:34.611575  418677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:23:34.611691  418677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:23:34.611796  418677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:23:34.611881  418677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:23:34.611938  418677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:23:34.615834  418677 out.go:252]   - Booting up control plane ...
	I1014 19:23:34.615927  418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:23:34.615999  418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:23:34.616053  418677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:23:34.616141  418677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:23:34.616223  418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:23:34.616305  418677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:23:34.616375  418677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:23:34.616410  418677 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:23:34.616578  418677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:23:34.616723  418677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:23:34.616787  418677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501329137s
	I1014 19:23:34.616886  418677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:23:34.616971  418677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 19:23:34.617055  418677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:23:34.617127  418677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:23:34.617197  418677 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
	I1014 19:23:34.617269  418677 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
	I1014 19:23:34.617331  418677 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
	I1014 19:23:34.617340  418677 kubeadm.go:318] 
	I1014 19:23:34.617424  418677 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:23:34.617498  418677 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:23:34.617568  418677 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:23:34.617642  418677 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:23:34.617710  418677 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:23:34.617795  418677 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:23:34.617838  418677 kubeadm.go:318] 
	I1014 19:23:34.617883  418677 kubeadm.go:402] duration metric: took 8m8.058016144s to StartCluster
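	The component-by-component container queries that follow can be collapsed into one call when triaging interactively (a hedged shortcut, not what the test harness runs):
	
	  sudo crictl ps -a | grep -E 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet'
	  # empty output matches the "0 containers" results below: no control-plane container was ever created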
	I1014 19:23:34.617950  418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:23:34.618023  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:23:34.647116  418677 cri.go:89] found id: ""
	I1014 19:23:34.647160  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.647172  418677 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:23:34.647182  418677 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:23:34.647255  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:23:34.673925  418677 cri.go:89] found id: ""
	I1014 19:23:34.673951  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.673960  418677 logs.go:284] No container was found matching "etcd"
	I1014 19:23:34.673966  418677 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:23:34.674025  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:23:34.701391  418677 cri.go:89] found id: ""
	I1014 19:23:34.701417  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.701425  418677 logs.go:284] No container was found matching "coredns"
	I1014 19:23:34.701430  418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:23:34.701502  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:23:34.728362  418677 cri.go:89] found id: ""
	I1014 19:23:34.728388  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.728397  418677 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:23:34.728403  418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:23:34.728453  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:23:34.755212  418677 cri.go:89] found id: ""
	I1014 19:23:34.755236  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.755243  418677 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:23:34.755249  418677 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:23:34.755300  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:23:34.781082  418677 cri.go:89] found id: ""
	I1014 19:23:34.781105  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.781113  418677 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:23:34.781119  418677 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:23:34.781165  418677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:23:34.809238  418677 cri.go:89] found id: ""
	I1014 19:23:34.809262  418677 logs.go:282] 0 containers: []
	W1014 19:23:34.809272  418677 logs.go:284] No container was found matching "kindnet"
	I1014 19:23:34.809287  418677 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:23:34.809305  418677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:23:34.873736  418677 logs.go:123] Gathering logs for container status ...
	I1014 19:23:34.873796  418677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:23:34.904538  418677 logs.go:123] Gathering logs for kubelet ...
	I1014 19:23:34.904566  418677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:23:34.968544  418677 logs.go:123] Gathering logs for dmesg ...
	I1014 19:23:34.968582  418677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:23:34.986486  418677 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:23:34.986518  418677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:23:35.047524  418677 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:23:35.039994    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.040511    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.042125    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.042584    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.044164    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:23:35.039994    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.040511    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.042125    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.042584    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 19:23:35.044164    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
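	A describe-nodes failure with "connection refused" on 8443 is consistent with the kubeadm health checks above: nothing ever served on the apiserver port. Two quick node-local checks (hypothetical debugging commands, not part of this run):
	
	  curl -k https://192.168.49.2:8443/livez   # the same endpoint kubeadm polled
	  sudo ss -tlnp | grep 8443                 # confirm whether anything is listening on the apiserver port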
	W1014 19:23:35.047550  418677 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501329137s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 19:23:35.047601  418677 out.go:285] * 
	W1014 19:23:35.047719  418677 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501329137s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:23:35.047737  418677 out.go:285] * 
	W1014 19:23:35.049388  418677 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:23:35.056001  418677 out.go:203] 
	W1014 19:23:35.057592  418677 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501329137s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000175264s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000197689s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000651524s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:23:35.057651  418677 out.go:285] * 
	I1014 19:23:35.060157  418677 out.go:203] 

** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (521.01s)
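
Every kubeadm failure above ends with the same crictl triage advice. A minimal sketch of that triage, assuming the run's "addons-995790" profile still exists on the Jenkins host, that crictl needs root inside the node, and that CONTAINERID is a placeholder rather than a value from this run:

	# open a shell on the failed control-plane node
	out/minikube-linux-amd64 ssh -p addons-995790
	# inside the node: list kube containers over the CRI-O socket named in the output above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs (substitute a real ID from the previous command)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# back on the host: collect the log bundle the error box requests
	out/minikube-linux-amd64 logs -p addons-995790 --file=logs.txt

The crictl invocations are copied from the kubeadm output; the ssh and logs commands are one plausible way to reach them, not part of the recorded run.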

x
+
TestErrorSpam/setup (498.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-442016 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-442016 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-442016 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-442016 --driver=docker  --container-runtime=crio: exit status 80 (8m18.649027554s)

-- stdout --
	* [nospam-442016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-442016" primary control-plane node in "nospam-442016" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-442016] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-442016] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.862162ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000803419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001156581s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000316394s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.067531ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000315563s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000483994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000527618s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.067531ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000315563s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000483994s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000527618s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-442016 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-442016 --driver=docker  --container-runtime=crio" failed: exit status 80
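The stderr above reports the same pattern as the addons run: two preflight warnings, then all three control-plane health endpoints refusing connections. A hedged sketch of probing those endpoints by hand, assuming the "nospam-442016" profile is still up and that curl is present in the node image (a connection refusal reproduces the failure; an HTTP error instead means the component is at least listening):

	# from the Jenkins host, enter the node
	out/minikube-linux-amd64 ssh -p nospam-442016
	# inside the node: address the [WARNING Service-Kubelet] line from the output above
	sudo systemctl enable kubelet.service
	# probe the endpoints kubeadm polled (-s silent, -k skip TLS verification)
	curl -sk https://192.168.49.2:8443/livez    || echo "kube-apiserver refused"
	curl -sk https://127.0.0.1:10257/healthz    || echo "kube-controller-manager refused"
	curl -sk https://127.0.0.1:10259/livez      || echo "kube-scheduler refused"

Enabling the kubelet unit only silences the warning; per the kubeadm output, the root failure is that the static-pod components never became healthy, so the crictl triage shown after TestAddons/Setup applies here as well.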
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-442016] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-442016] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 502.862162ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000803419s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001156581s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000316394s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 502.067531ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000315563s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000483994s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000527618s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 502.067531ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000315563s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000483994s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000527618s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-442016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21409
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-442016" primary control-plane node in "nospam-442016" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-442016] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-442016] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.862162ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000803419s
[control-plane-check] kube-scheduler is not healthy after 4m0.001156581s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000316394s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.067531ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000315563s
[control-plane-check] kube-apiserver is not healthy after 4m0.000483994s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000527618s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.067531ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000315563s
[control-plane-check] kube-apiserver is not healthy after 4m0.000483994s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000527618s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
--- FAIL: TestErrorSpam/setup (498.66s)
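Both kubeadm attempts in this test fail with the same signature: the kubelet reports healthy in about half a second, but kube-apiserver, kube-controller-manager, and kube-scheduler never answer their /livez and /healthz endpoints within the 4m0s wait-control-plane window, so minikube retries init once and then exits with GUEST_START. A minimal triage sketch, assuming the nospam-442016 node container is still up; it only combines the crictl and minikube commands already suggested in the output above (CONTAINERID is a placeholder for an ID found in the first listing):

    # open a shell on the minikube node for this profile
    minikube ssh -p nospam-442016
    # list the kube-* containers CRI-O tried to start, per the kubeadm hint above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # inspect the logs of whichever container exited or is crash-looping
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # back on the host: collect full logs for a GitHub issue, per the boxed advice
    minikube logs -p nospam-442016 --file=logs.txt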
TestFunctional/serial/StartWithProxy (503.57s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m22.228397324s)
-- stdout --
	* [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - HTTP_PROXY=localhost:37091
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:37091 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-744288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-744288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000964072s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000354821s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000396646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000782501s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001067025s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000245486s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566773s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000489507s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001067025s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000245486s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566773s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000489507s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
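	The three connection-refused endpoints in the stderr above can be probed by hand. A minimal sketch, assuming the functional-744288 node container is still running and has curl available in its image (the 127.0.0.1 ports exist only inside the node's network namespace, so the probes go through docker exec):
	
		# Probe the same endpoints kubeadm polls; -k skips TLS verification.
		docker exec functional-744288 curl -sk --max-time 10 https://192.168.49.2:8441/livez    # kube-apiserver
		docker exec functional-744288 curl -sk --max-time 10 https://127.0.0.1:10259/livez      # kube-scheduler
		docker exec functional-744288 curl -sk --max-time 10 https://127.0.0.1:10257/healthz    # kube-controller-manager
		# Per the kubeadm hint above, look for crashed control-plane containers:
		docker exec functional-744288 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	If the crictl listing comes back empty, the kubelet never created the static pods, and the kubelet journal inside the node (journalctl -u kubelet) is the next place to look.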
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
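	Rather than scanning the full inspect dump, the same facts can be pulled with docker inspect Go-template filters. A sketch using the standard docker CLI, mirroring the template the harness itself uses later to resolve the SSH port:
	
		# Node IP on the profile network (192.168.49.2 in the dump above):
		docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-744288
		# Host port mapped to the apiserver port 8441/tcp (32901 in the dump above):
		docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-744288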
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 6 (320.275775ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 19:40:28.939661  436817 status.go:458] kubeconfig endpoint: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
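	The exit-status-6 stderr explains the warning: the functional-744288 entry is missing from the kubeconfig, so status cannot resolve the apiserver endpoint. A sketch of the check-and-repair that minikube itself suggests (note it only fully helps once the cluster actually starts; here the start failed, so update-context would regenerate the context entry but status would still report the apiserver down):
	
		# Confirm the profile has no context in the kubeconfig the test uses:
		kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21409-413763/kubeconfig
		# Rewrite the context for this profile, as the warning recommends:
		out/minikube-linux-amd64 update-context -p functional-744288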
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-667039                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-667039   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ delete  │ -p download-only-102449                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-102449   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ --download-only -p download-docker-042272 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-042272 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p download-docker-042272                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-042272 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ --download-only -p binary-mirror-194366 --alsologtostderr --binary-mirror http://127.0.0.1:45401 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-194366   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p binary-mirror-194366                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-194366   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ enable dashboard -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ start   │ -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:23 UTC │ 14 Oct 25 19:23 UTC │
	│ start   │ -p nospam-442016 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-442016 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:23 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-744288      │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:32:06
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:32:06.428042  431785 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:32:06.428294  431785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:32:06.428298  431785 out.go:374] Setting ErrFile to fd 2...
	I1014 19:32:06.428301  431785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:32:06.428481  431785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:32:06.428977  431785 out.go:368] Setting JSON to false
	I1014 19:32:06.429864  431785 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8072,"bootTime":1760462254,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:32:06.429956  431785 start.go:141] virtualization: kvm guest
	I1014 19:32:06.432655  431785 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:32:06.434138  431785 notify.go:220] Checking for updates...
	I1014 19:32:06.434150  431785 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:32:06.435715  431785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:32:06.437153  431785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:32:06.438782  431785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:32:06.440372  431785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:32:06.442136  431785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:32:06.443600  431785 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:32:06.467938  431785 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:32:06.468038  431785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:32:06.534578  431785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 19:32:06.523496216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:32:06.534715  431785 docker.go:318] overlay module found
	I1014 19:32:06.537862  431785 out.go:179] * Using the docker driver based on user configuration
	I1014 19:32:06.539281  431785 start.go:305] selected driver: docker
	I1014 19:32:06.539291  431785 start.go:925] validating driver "docker" against <nil>
	I1014 19:32:06.539305  431785 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:32:06.540111  431785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:32:06.604453  431785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 19:32:06.593048937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:32:06.604611  431785 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:32:06.604861  431785 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:32:06.606889  431785 out.go:179] * Using Docker driver with root privileges
	I1014 19:32:06.608122  431785 cni.go:84] Creating CNI manager for ""
	I1014 19:32:06.608162  431785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:32:06.608168  431785 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:32:06.608250  431785 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:32:06.609792  431785 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:32:06.611050  431785 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:32:06.612433  431785 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:32:06.613912  431785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:32:06.613952  431785 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:32:06.613959  431785 cache.go:58] Caching tarball of preloaded images
	I1014 19:32:06.614061  431785 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:32:06.614049  431785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:32:06.614070  431785 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:32:06.614362  431785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:32:06.614379  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json: {Name:mk9bd66ec812a5c6e8ff56fe9dfc507f4794d7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:06.636240  431785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:32:06.636255  431785 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:32:06.640891  431785 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:32:06.640928  431785 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:32:06.641010  431785 start.go:364] duration metric: took 62.87µs to acquireMachinesLock for "functional-744288"
	I1014 19:32:06.641043  431785 start.go:93] Provisioning new machine with config: &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:32:06.641105  431785 start.go:125] createHost starting for "" (driver="docker")
	I1014 19:32:06.643915  431785 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1014 19:32:06.644193  431785 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:37091 to docker env.
	I1014 19:32:06.644219  431785 start.go:159] libmachine.API.Create for "functional-744288" (driver="docker")
	I1014 19:32:06.644236  431785 client.go:168] LocalClient.Create starting
	I1014 19:32:06.644320  431785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 19:32:06.644349  431785 main.go:141] libmachine: Decoding PEM data...
	I1014 19:32:06.644360  431785 main.go:141] libmachine: Parsing certificate...
	I1014 19:32:06.644414  431785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 19:32:06.644427  431785 main.go:141] libmachine: Decoding PEM data...
	I1014 19:32:06.644434  431785 main.go:141] libmachine: Parsing certificate...
	I1014 19:32:06.645256  431785 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 19:32:06.662780  431785 cli_runner.go:211] docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 19:32:06.662847  431785 network_create.go:284] running [docker network inspect functional-744288] to gather additional debugging logs...
	I1014 19:32:06.662859  431785 cli_runner.go:164] Run: docker network inspect functional-744288
	W1014 19:32:06.679695  431785 cli_runner.go:211] docker network inspect functional-744288 returned with exit code 1
	I1014 19:32:06.679717  431785 network_create.go:287] error running [docker network inspect functional-744288]: docker network inspect functional-744288: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-744288 not found
	I1014 19:32:06.679726  431785 network_create.go:289] output of [docker network inspect functional-744288]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-744288 not found
	
	** /stderr **
	I1014 19:32:06.679859  431785 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:32:06.697498  431785 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c03b50}
	I1014 19:32:06.697528  431785 network_create.go:124] attempt to create docker network functional-744288 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 19:32:06.697576  431785 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-744288 functional-744288
	I1014 19:32:06.755775  431785 network_create.go:108] docker network functional-744288 192.168.49.0/24 created
	I1014 19:32:06.755800  431785 kic.go:121] calculated static IP "192.168.49.2" for the "functional-744288" container
	I1014 19:32:06.755864  431785 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 19:32:06.772572  431785 cli_runner.go:164] Run: docker volume create functional-744288 --label name.minikube.sigs.k8s.io=functional-744288 --label created_by.minikube.sigs.k8s.io=true
	I1014 19:32:06.792010  431785 oci.go:103] Successfully created a docker volume functional-744288
	I1014 19:32:06.792091  431785 cli_runner.go:164] Run: docker run --rm --name functional-744288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-744288 --entrypoint /usr/bin/test -v functional-744288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 19:32:07.202314  431785 oci.go:107] Successfully prepared a docker volume functional-744288
	I1014 19:32:07.202361  431785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:32:07.202384  431785 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 19:32:07.202445  431785 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-744288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 19:32:11.627615  431785 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-744288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.425128263s)
	I1014 19:32:11.627649  431785 kic.go:203] duration metric: took 4.425260648s to extract preloaded images to volume ...
	W1014 19:32:11.627747  431785 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 19:32:11.627819  431785 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 19:32:11.627861  431785 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 19:32:11.684778  431785 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-744288 --name functional-744288 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-744288 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-744288 --network functional-744288 --ip 192.168.49.2 --volume functional-744288:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 19:32:11.963143  431785 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Running}}
	I1014 19:32:11.981316  431785 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:32:12.000135  431785 cli_runner.go:164] Run: docker exec functional-744288 stat /var/lib/dpkg/alternatives/iptables
	I1014 19:32:12.045919  431785 oci.go:144] the created container "functional-744288" has a running status.
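Note the --publish=127.0.0.1::22 style flags in the docker run above: the host port is left empty, so Docker assigns an ephemeral one, and every later SSH step has to look it up first. The log does this with a Go template; docker port reports the same mapping (32898 for sshd in this run):

    # Either command prints the ephemeral host port bound to the node's sshd.
    docker port functional-744288 22/tcp
    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      functional-744288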
	I1014 19:32:12.045949  431785 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa...
	I1014 19:32:12.144013  431785 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 19:32:12.172729  431785 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:32:12.193443  431785 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 19:32:12.193458  431785 kic_runner.go:114] Args: [docker exec --privileged functional-744288 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 19:32:12.236273  431785 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:32:12.258039  431785 machine.go:93] provisionDockerMachine start ...
	I1014 19:32:12.258142  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:12.285029  431785 main.go:141] libmachine: Using SSH client type: native
	I1014 19:32:12.285361  431785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:32:12.285372  431785 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:32:12.286287  431785 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45372->127.0.0.1:32898: read: connection reset by peer
	I1014 19:32:15.435107  431785 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:32:15.435132  431785 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:32:15.435197  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:15.452707  431785 main.go:141] libmachine: Using SSH client type: native
	I1014 19:32:15.453006  431785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:32:15.453018  431785 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:32:15.609284  431785 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:32:15.609360  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:15.627579  431785 main.go:141] libmachine: Using SSH client type: native
	I1014 19:32:15.627806  431785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:32:15.627818  431785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:32:15.774893  431785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:32:15.774917  431785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:32:15.774934  431785 ubuntu.go:190] setting up certificates
	I1014 19:32:15.774945  431785 provision.go:84] configureAuth start
	I1014 19:32:15.775003  431785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:32:15.793159  431785 provision.go:143] copyHostCerts
	I1014 19:32:15.793212  431785 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:32:15.793221  431785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:32:15.793301  431785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:32:15.793385  431785 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:32:15.793388  431785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:32:15.793411  431785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:32:15.793461  431785 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:32:15.793464  431785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:32:15.793486  431785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:32:15.793533  431785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
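minikube generates this server certificate in Go (crypto.go), not with openssl. Purely as an illustration of what is being produced, an equivalent OpenSSL 3.x invocation would look roughly like this (file names are illustrative; the SAN list and org match the log line above):

    # Illustrative sketch only: issue a server cert signed by the local CA
    # with the SANs minikube lists. -copy_extensions needs OpenSSL 3.x.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.functional-744288" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-744288,DNS:localhost,DNS:minikube" \
      -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -copy_extensions copyall -days 365 -out server.pem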
	I1014 19:32:16.136452  431785 provision.go:177] copyRemoteCerts
	I1014 19:32:16.136505  431785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:32:16.136542  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:16.154498  431785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:32:16.258818  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:32:16.279989  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:32:16.298691  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:32:16.317295  431785 provision.go:87] duration metric: took 542.333394ms to configureAuth
	I1014 19:32:16.317323  431785 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:32:16.317503  431785 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:32:16.317599  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:16.335536  431785 main.go:141] libmachine: Using SSH client type: native
	I1014 19:32:16.335822  431785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:32:16.335836  431785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:32:16.596003  431785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:32:16.596026  431785 machine.go:96] duration metric: took 4.337963907s to provisionDockerMachine
	I1014 19:32:16.596035  431785 client.go:171] duration metric: took 9.951794718s to LocalClient.Create
	I1014 19:32:16.596054  431785 start.go:167] duration metric: took 9.951837128s to libmachine.API.Create "functional-744288"
	I1014 19:32:16.596060  431785 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:32:16.596069  431785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:32:16.596122  431785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:32:16.596155  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:16.613347  431785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:32:16.718205  431785 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:32:16.721947  431785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:32:16.721966  431785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:32:16.721976  431785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:32:16.722029  431785 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:32:16.722126  431785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:32:16.722207  431785 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:32:16.722242  431785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:32:16.730297  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:32:16.751528  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:32:16.769004  431785 start.go:296] duration metric: took 172.929237ms for postStartSetup
	I1014 19:32:16.769325  431785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:32:16.786283  431785 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:32:16.786532  431785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:32:16.786566  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:16.803972  431785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:32:16.904255  431785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:32:16.908947  431785 start.go:128] duration metric: took 10.267826349s to createHost
	I1014 19:32:16.908967  431785 start.go:83] releasing machines lock for "functional-744288", held for 10.267948226s
	I1014 19:32:16.909047  431785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:32:16.930240  431785 out.go:179] * Found network options:
	I1014 19:32:16.931507  431785 out.go:179]   - HTTP_PROXY=localhost:37091
	W1014 19:32:16.932801  431785 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1014 19:32:16.934082  431785 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1014 19:32:16.935192  431785 ssh_runner.go:195] Run: cat /version.json
	I1014 19:32:16.935239  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:16.935277  431785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:32:16.935326  431785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:32:16.953650  431785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:32:16.953964  431785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:32:17.105314  431785 ssh_runner.go:195] Run: systemctl --version
	I1014 19:32:17.111790  431785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:32:17.147231  431785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:32:17.152127  431785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:32:17.152183  431785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:32:17.178615  431785 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 19:32:17.178634  431785 start.go:495] detecting cgroup driver to use...
	I1014 19:32:17.178667  431785 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:32:17.178749  431785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:32:17.195620  431785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:32:17.208379  431785 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:32:17.208435  431785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:32:17.225519  431785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:32:17.242721  431785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:32:17.324352  431785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:32:17.410558  431785 docker.go:234] disabling docker service ...
	I1014 19:32:17.410618  431785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:32:17.430105  431785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:32:17.442884  431785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:32:17.526420  431785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:32:17.607993  431785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:32:17.620964  431785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:32:17.635899  431785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:32:17.635950  431785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.646848  431785 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:32:17.646906  431785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.656212  431785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.665546  431785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.674901  431785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:32:17.683463  431785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.692786  431785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.706826  431785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:32:17.715618  431785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:32:17.723081  431785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:32:17.730482  431785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:32:17.808319  431785 ssh_runner.go:195] Run: sudo systemctl restart crio
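Taken together, the sed edits above pin the pause image, switch cri-o to the systemd cgroup manager, put conmon in the pod cgroup, and open unprivileged low ports. The resulting drop-in should look roughly like this (a sketch; section placement assumes stock kicbase defaults for everything the edits do not touch):

    # Sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits above.
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF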
	I1014 19:32:17.919787  431785 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:32:17.919851  431785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:32:17.924024  431785 start.go:563] Will wait 60s for crictl version
	I1014 19:32:17.924066  431785 ssh_runner.go:195] Run: which crictl
	I1014 19:32:17.927557  431785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:32:17.951542  431785 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:32:17.951601  431785 ssh_runner.go:195] Run: crio --version
	I1014 19:32:17.979952  431785 ssh_runner.go:195] Run: crio --version
	I1014 19:32:18.010707  431785 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:32:18.011830  431785 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:32:18.028860  431785 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:32:18.033097  431785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:32:18.043521  431785 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:32:18.043635  431785 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:32:18.043694  431785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:32:18.075329  431785 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:32:18.075341  431785 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:32:18.075385  431785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:32:18.102261  431785 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:32:18.102274  431785 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:32:18.102280  431785 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:32:18.102369  431785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:32:18.102425  431785 ssh_runner.go:195] Run: crio config
	I1014 19:32:18.147727  431785 cni.go:84] Creating CNI manager for ""
	I1014 19:32:18.147737  431785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:32:18.147778  431785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:32:18.147804  431785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:32:18.147924  431785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:32:18.147982  431785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:32:18.156465  431785 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:32:18.156521  431785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:32:18.164411  431785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:32:18.177189  431785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:32:18.193167  431785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
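The rendered config lands at /var/tmp/minikube/kubeadm.yaml.new (it is promoted to kubeadm.yaml just before init, as seen further down). A config of this shape can be sanity-checked without touching the node, e.g.:

    # Validate the rendered kubeadm config (kubeadm >= 1.26), or render
    # everything kubeadm would do without applying it.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run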
	I1014 19:32:18.205965  431785 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:32:18.209722  431785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:32:18.219844  431785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:32:18.297641  431785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:32:18.323391  431785 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:32:18.323406  431785 certs.go:195] generating shared ca certs ...
	I1014 19:32:18.323428  431785 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:18.323592  431785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:32:18.323637  431785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:32:18.323645  431785 certs.go:257] generating profile certs ...
	I1014 19:32:18.323713  431785 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:32:18.323734  431785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt with IP's: []
	I1014 19:32:18.488361  431785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt ...
	I1014 19:32:18.488381  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: {Name:mk4dc76794e5ff697486809135792b05a3cc165b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:18.488581  431785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key ...
	I1014 19:32:18.488589  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key: {Name:mk55d2ebae294521100ca6c0bcd1a312969a5616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:18.488672  431785 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:32:18.488683  431785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt.d065d9e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 19:32:18.825111  431785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt.d065d9e2 ...
	I1014 19:32:18.825131  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt.d065d9e2: {Name:mk15d2018e3f70de8afb5aa19a1685e93ca56889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:18.825315  431785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2 ...
	I1014 19:32:18.825324  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2: {Name:mkff90260f24c19e49145f2ca5a9fc15b8cda9df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:18.825403  431785 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt.d065d9e2 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt
	I1014 19:32:18.825522  431785 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key
	I1014 19:32:18.825576  431785 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:32:18.825599  431785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt with IP's: []
	I1014 19:32:19.106819  431785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt ...
	I1014 19:32:19.106850  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt: {Name:mk9180068997be38ac54c2ccf0467d1389813c50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:19.107044  431785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key ...
	I1014 19:32:19.107054  431785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key: {Name:mke38306bee5fea0f86ca7788eee7d46c60696dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:32:19.107233  431785 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:32:19.107270  431785 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:32:19.107276  431785 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:32:19.107295  431785 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:32:19.107314  431785 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:32:19.107332  431785 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:32:19.107364  431785 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:32:19.108047  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:32:19.126687  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:32:19.144301  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:32:19.161342  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:32:19.179074  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:32:19.197080  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:32:19.214244  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:32:19.231186  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:32:19.248484  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:32:19.267800  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:32:19.285404  431785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:32:19.303348  431785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:32:19.316308  431785 ssh_runner.go:195] Run: openssl version
	I1014 19:32:19.322796  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:32:19.331505  431785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:32:19.335517  431785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:32:19.335561  431785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:32:19.370245  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 19:32:19.379501  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:32:19.388223  431785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:32:19.392048  431785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:32:19.392100  431785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:32:19.426425  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:32:19.435551  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:32:19.444243  431785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:32:19.447952  431785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:32:19.448002  431785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:32:19.482440  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
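The symlink names in this block (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: OpenSSL looks certificates up in a hashed directory, and the openssl x509 -hash calls above compute exactly that subject-name hash. For example:

    # Prints the subject hash (b5213941 for the minikube CA in this run);
    # OpenSSL then expects the cert to be reachable as <hash>.0.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0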
	I1014 19:32:19.491568  431785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:32:19.495357  431785 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 19:32:19.495465  431785 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:32:19.495552  431785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:32:19.495638  431785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:32:19.524600  431785 cri.go:89] found id: ""
	I1014 19:32:19.524658  431785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:32:19.534062  431785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:32:19.542196  431785 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:32:19.542241  431785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:32:19.549848  431785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:32:19.549858  431785 kubeadm.go:157] found existing configuration files:
	
	I1014 19:32:19.549899  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:32:19.557474  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:32:19.557514  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:32:19.564738  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:32:19.572216  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:32:19.572265  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:32:19.579724  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:32:19.587587  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:32:19.587630  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:32:19.595287  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:32:19.602886  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:32:19.602930  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:32:19.610377  431785 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:32:19.684339  431785 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:32:19.746714  431785 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:36:24.284488  431785 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 19:36:24.284623  431785 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:36:24.287971  431785 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:36:24.288058  431785 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:36:24.288249  431785 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:36:24.288362  431785 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:36:24.288437  431785 kubeadm.go:318] OS: Linux
	I1014 19:36:24.288538  431785 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:36:24.288648  431785 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:36:24.288737  431785 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:36:24.288803  431785 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:36:24.288845  431785 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:36:24.288882  431785 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:36:24.288945  431785 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:36:24.289018  431785 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:36:24.289099  431785 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:36:24.289176  431785 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:36:24.289249  431785 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:36:24.289301  431785 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:36:24.291914  431785 out.go:252]   - Generating certificates and keys ...
	I1014 19:36:24.291994  431785 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:36:24.292059  431785 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:36:24.292122  431785 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 19:36:24.292193  431785 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 19:36:24.292246  431785 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 19:36:24.292284  431785 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 19:36:24.292324  431785 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 19:36:24.292414  431785 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-744288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:36:24.292463  431785 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 19:36:24.292552  431785 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-744288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 19:36:24.292610  431785 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 19:36:24.292686  431785 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 19:36:24.292722  431785 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 19:36:24.292792  431785 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:36:24.292837  431785 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:36:24.292889  431785 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:36:24.292929  431785 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:36:24.292979  431785 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:36:24.293024  431785 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:36:24.293087  431785 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:36:24.293144  431785 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:36:24.294638  431785 out.go:252]   - Booting up control plane ...
	I1014 19:36:24.294710  431785 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:36:24.294807  431785 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:36:24.294871  431785 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:36:24.294953  431785 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:36:24.295043  431785 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:36:24.295131  431785 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:36:24.295194  431785 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:36:24.295223  431785 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:36:24.295327  431785 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:36:24.295419  431785 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:36:24.295472  431785 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000964072s
	I1014 19:36:24.295547  431785 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:36:24.295624  431785 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:36:24.295691  431785 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:36:24.295768  431785 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:36:24.295822  431785 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000354821s
	I1014 19:36:24.295903  431785 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000396646s
	I1014 19:36:24.295961  431785 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000782501s
	I1014 19:36:24.295964  431785 kubeadm.go:318] 
	I1014 19:36:24.296040  431785 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:36:24.296118  431785 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:36:24.296189  431785 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:36:24.296274  431785 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:36:24.296331  431785 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:36:24.296393  431785 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:36:24.296461  431785 kubeadm.go:318] 
	W1014 19:36:24.296582  431785 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-744288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-744288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000964072s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000354821s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000396646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000782501s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 19:36:24.296684  431785 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:36:24.739573  431785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:36:24.752774  431785 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:36:24.752826  431785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:36:24.761066  431785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:36:24.761078  431785 kubeadm.go:157] found existing configuration files:
	
	I1014 19:36:24.761125  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:36:24.768901  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:36:24.768959  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:36:24.776530  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:36:24.784432  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:36:24.784470  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:36:24.792009  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:36:24.799739  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:36:24.799801  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:36:24.808026  431785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:36:24.816193  431785 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:36:24.816259  431785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:36:24.824419  431785 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:36:24.883501  431785 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:36:24.944742  431785 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:40:28.148491  431785 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 19:40:28.148658  431785 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:40:28.151804  431785 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:40:28.151853  431785 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:40:28.151957  431785 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:40:28.152011  431785 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:40:28.152038  431785 kubeadm.go:318] OS: Linux
	I1014 19:40:28.152094  431785 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:40:28.152134  431785 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:40:28.152173  431785 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:40:28.152212  431785 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:40:28.152267  431785 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:40:28.152308  431785 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:40:28.152348  431785 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:40:28.152380  431785 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:40:28.152453  431785 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:40:28.152557  431785 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:40:28.152652  431785 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:40:28.152718  431785 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:40:28.155207  431785 out.go:252]   - Generating certificates and keys ...
	I1014 19:40:28.155295  431785 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:40:28.155368  431785 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:40:28.155468  431785 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:40:28.155533  431785 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:40:28.155613  431785 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:40:28.155662  431785 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:40:28.155733  431785 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:40:28.155822  431785 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:40:28.155956  431785 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:40:28.156021  431785 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:40:28.156051  431785 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:40:28.156125  431785 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:40:28.156205  431785 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:40:28.156276  431785 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:40:28.156335  431785 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:40:28.156429  431785 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:40:28.156504  431785 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:40:28.156611  431785 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:40:28.156701  431785 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:40:28.158978  431785 out.go:252]   - Booting up control plane ...
	I1014 19:40:28.159069  431785 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:40:28.159149  431785 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:40:28.159219  431785 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:40:28.159299  431785 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:40:28.159370  431785 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:40:28.159453  431785 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:40:28.159516  431785 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:40:28.159545  431785 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:40:28.159689  431785 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:40:28.159826  431785 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:40:28.159875  431785 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001067025s
	I1014 19:40:28.159949  431785 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:40:28.160010  431785 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:40:28.160076  431785 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:40:28.160139  431785 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:40:28.160214  431785 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000245486s
	I1014 19:40:28.160280  431785 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000566773s
	I1014 19:40:28.160351  431785 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000489507s
	I1014 19:40:28.160354  431785 kubeadm.go:318] 
	I1014 19:40:28.160427  431785 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:40:28.160495  431785 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:40:28.160566  431785 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:40:28.160643  431785 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:40:28.160733  431785 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:40:28.160850  431785 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:40:28.160872  431785 kubeadm.go:318] 
	I1014 19:40:28.160993  431785 kubeadm.go:402] duration metric: took 8m8.66553502s to StartCluster
	I1014 19:40:28.161053  431785 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:40:28.161116  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:40:28.189622  431785 cri.go:89] found id: ""
	I1014 19:40:28.189663  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.189673  431785 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:40:28.189680  431785 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:40:28.189749  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:40:28.217289  431785 cri.go:89] found id: ""
	I1014 19:40:28.217311  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.217320  431785 logs.go:284] No container was found matching "etcd"
	I1014 19:40:28.217326  431785 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:40:28.217392  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:40:28.244675  431785 cri.go:89] found id: ""
	I1014 19:40:28.244692  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.244699  431785 logs.go:284] No container was found matching "coredns"
	I1014 19:40:28.244704  431785 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:40:28.244766  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:40:28.272242  431785 cri.go:89] found id: ""
	I1014 19:40:28.272261  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.272269  431785 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:40:28.272274  431785 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:40:28.272325  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:40:28.298843  431785 cri.go:89] found id: ""
	I1014 19:40:28.298860  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.298867  431785 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:40:28.298872  431785 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:40:28.298923  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:40:28.325714  431785 cri.go:89] found id: ""
	I1014 19:40:28.325730  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.325736  431785 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:40:28.325740  431785 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:40:28.325804  431785 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:40:28.351941  431785 cri.go:89] found id: ""
	I1014 19:40:28.351958  431785 logs.go:282] 0 containers: []
	W1014 19:40:28.351966  431785 logs.go:284] No container was found matching "kindnet"
	I1014 19:40:28.351975  431785 logs.go:123] Gathering logs for kubelet ...
	I1014 19:40:28.351988  431785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:40:28.423321  431785 logs.go:123] Gathering logs for dmesg ...
	I1014 19:40:28.423345  431785 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:40:28.441206  431785 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:40:28.441225  431785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:40:28.502815  431785 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:40:28.495547    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.496148    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.497437    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.497961    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.499545    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:40:28.495547    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.496148    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.497437    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.497961    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:28.499545    2427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:40:28.502828  431785 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:40:28.502848  431785 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:40:28.563655  431785 logs.go:123] Gathering logs for container status ...
	I1014 19:40:28.563682  431785 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 19:40:28.594854  431785 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001067025s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000245486s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566773s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000489507s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 19:40:28.594919  431785 out.go:285] * 
	W1014 19:40:28.595006  431785 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001067025s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000245486s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566773s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000489507s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:40:28.595020  431785 out.go:285] * 
	W1014 19:40:28.597252  431785 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:40:28.600878  431785 out.go:203] 
	W1014 19:40:28.602238  431785 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001067025s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000245486s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000566773s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000489507s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:40:28.602280  431785 out.go:285] * 
	I1014 19:40:28.603866  431785 out.go:203] 
	
	
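Both kubeadm init attempts above fail identically: the kubelet reports healthy within about a second, but all three control-plane health checks time out after 4m0s with connection refused, so nothing ever listens on 8441, 10257, or 10259. The component logs gathered below (CRI-O, kubelet) point at container creation itself failing. As a minimal sketch, the same probes kubeadm runs can be repeated by hand from inside the node (profile name and endpoints are taken from the log above):

    minikube ssh -p functional-744288
    curl -k https://192.168.49.2:8441/livez     # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez       # kube-scheduler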
	==> CRI-O <==
	Oct 14 19:40:20 functional-744288 crio[789]: time="2025-10-14T19:40:20.859551487Z" level=info msg="createCtr: removing container 83410b269992dbf835260937846dc3cf820a958a82307892ff5d3a705f32af81" id=893d3799-4e29-4417-83c1-b439e0226525 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:20 functional-744288 crio[789]: time="2025-10-14T19:40:20.859584298Z" level=info msg="createCtr: deleting container 83410b269992dbf835260937846dc3cf820a958a82307892ff5d3a705f32af81 from storage" id=893d3799-4e29-4417-83c1-b439e0226525 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:20 functional-744288 crio[789]: time="2025-10-14T19:40:20.861901262Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_7dacb23619ff0889511bcb2e81339e77_0" id=893d3799-4e29-4417-83c1-b439e0226525 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.836608939Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8cff4714-efe8-4ee1-9460-4a93930df187 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.836773082Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f932bcdd-4ef7-4443-972c-6db1b4fb5b35 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.837504513Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=15b9c4e8-465f-4e45-b19b-c7ea7f57c2be name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.837510419Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=fee0cef8-00e0-4fe0-92f6-3cf9d843390c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.838492624Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-744288/kube-controller-manager" id=6655339a-89af-4d69-9798-d0fe15cd476f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.838492525Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-744288/kube-scheduler" id=230faf44-34ac-431e-ad71-b152b470f0ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.838735739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.838916994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.8433202Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.843742214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.844834615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.84537086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.863972051Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6655339a-89af-4d69-9798-d0fe15cd476f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.864831599Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=230faf44-34ac-431e-ad71-b152b470f0ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.865401764Z" level=info msg="createCtr: deleting container ID a7206267e9b30ec7ba205bca8906174edbcfacd4d1584c948a44281c31dfb12f from idIndex" id=6655339a-89af-4d69-9798-d0fe15cd476f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.8654399Z" level=info msg="createCtr: removing container a7206267e9b30ec7ba205bca8906174edbcfacd4d1584c948a44281c31dfb12f" id=6655339a-89af-4d69-9798-d0fe15cd476f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.865484355Z" level=info msg="createCtr: deleting container a7206267e9b30ec7ba205bca8906174edbcfacd4d1584c948a44281c31dfb12f from storage" id=6655339a-89af-4d69-9798-d0fe15cd476f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.866225264Z" level=info msg="createCtr: deleting container ID 4bb9001bd0ee1a50dc83cf5a038838141cfa871fa4c01857bcdf2d59034a1fe9 from idIndex" id=230faf44-34ac-431e-ad71-b152b470f0ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.866266906Z" level=info msg="createCtr: removing container 4bb9001bd0ee1a50dc83cf5a038838141cfa871fa4c01857bcdf2d59034a1fe9" id=230faf44-34ac-431e-ad71-b152b470f0ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.866301961Z" level=info msg="createCtr: deleting container 4bb9001bd0ee1a50dc83cf5a038838141cfa871fa4c01857bcdf2d59034a1fe9 from storage" id=230faf44-34ac-431e-ad71-b152b470f0ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.869372151Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=6655339a-89af-4d69-9798-d0fe15cd476f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:40:25 functional-744288 crio[789]: time="2025-10-14T19:40:25.86960849Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-744288_kube-system_e9679524bf37cc2b727411d0e5a93bfe_0" id=230faf44-34ac-431e-ad71-b152b470f0ca name=/runtime.v1.RuntimeService/CreateContainer
	
	
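Every CreateContainer call above ends in "cannot open sd-bus: No such file or directory", which typically means the runtime (or its OCI runtime, e.g. runc) is configured for the systemd cgroup manager but cannot reach systemd's D-Bus socket inside the node. A quick check, as a sketch (run inside the node; /etc/crio is CRI-O's default config location and an assumption here):

    ls -l /run/dbus/system_bus_socket      # must exist for the systemd cgroup driver
    grep -rn cgroup_manager /etc/crio/     # "systemd" requires the socket above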
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:40:29.536244    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:29.536821    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:29.538387    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:29.538935    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:40:29.540460    2574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:40:29 up  2:22,  0 user,  load average: 0.00, 0.06, 3.35
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:40:20 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:40:20 functional-744288 kubelet[1809]: E1014 19:40:20.862369    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="7dacb23619ff0889511bcb2e81339e77"
	Oct 14 19:40:24 functional-744288 kubelet[1809]: E1014 19:40:24.458048    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:40:24 functional-744288 kubelet[1809]: I1014 19:40:24.621349    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:40:24 functional-744288 kubelet[1809]: E1014 19:40:24.621731    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.589814    1809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e72ac19058e88  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:36:27.828178568 +0000 UTC m=+0.685163688,LastTimestamp:2025-10-14 19:36:27.828178568 +0000 UTC m=+0.685163688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.836127    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.836300    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.869740    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:40:25 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:40:25 functional-744288 kubelet[1809]:  > podSandboxID="e8186070b2ac7bccf45cf53cdedb42b8128ae6650737da34ded6f3d9a5f75310"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.869867    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:40:25 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:40:25 functional-744288 kubelet[1809]:  > podSandboxID="a0f826f1bcdda21916898df58520d48e616e58d88f26d4a2d42009ebb731c254"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.869924    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:40:25 functional-744288 kubelet[1809]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:40:25 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.869961    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:40:25 functional-744288 kubelet[1809]:         container kube-scheduler start failed in pod kube-scheduler-functional-744288_kube-system(e9679524bf37cc2b727411d0e5a93bfe): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:40:25 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.869962    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:40:25 functional-744288 kubelet[1809]: E1014 19:40:25.871136    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-744288" podUID="e9679524bf37cc2b727411d0e5a93bfe"
	Oct 14 19:40:26 functional-744288 kubelet[1809]: E1014 19:40:26.466489    1809 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 14 19:40:26 functional-744288 kubelet[1809]: E1014 19:40:26.552425    1809 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 19:40:27 functional-744288 kubelet[1809]: E1014 19:40:27.854352    1809 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	

-- /stdout --
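The repeated kubelet failures above share one root cause: CRI-O on this node is configured with cgroup_manager = "systemd" (the sed invocation that sets it appears in the SoftStart trace below), so its OCI runtime asks systemd over D-Bus to create each container's cgroup scope. "cannot open sd-bus: No such file or directory" typically means that bus socket is not available inside the kicbase node container, so the static control-plane pods are never created, which is consistent with the connection-refused errors against 192.168.49.2:8441. A minimal diagnostic sketch, run from the host; the profile name comes from the log, while the socket path is an assumption about where sd-bus looks by default:

	# Check that the D-Bus system socket exists inside the node container;
	# if /run/dbus/system_bus_socket is absent, "cannot open sd-bus" is expected.
	docker exec functional-744288 ls -l /run/dbus/system_bus_socket
	# Confirm that CRI-O is indeed using the systemd cgroup manager.
	docker exec functional-744288 grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf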
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 6 (308.21564ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 19:40:29.935954  437144 status.go:458] kubeconfig endpoint: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (503.57s)
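The status probe above exits 6 before it can even reach an apiserver check: the profile's endpoint entry is missing from the kubeconfig, so there is no server to query. When retrying kubectl-driven checks by hand, the fix suggested in the stdout should come first; a short sketch, with the profile name taken from the log:

	# Rewrite the kubeconfig entry for this profile, then re-check status.
	out/minikube-linux-amd64 update-context -p functional-744288
	out/minikube-linux-amd64 status -p functional-744288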

x
+
TestFunctional/serial/SoftStart (366.74s)

=== RUN   TestFunctional/serial/SoftStart
I1014 19:40:29.954018  417373 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744288 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744288 --alsologtostderr -v=8: exit status 80 (6m4.082913332s)

-- stdout --
	* [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
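Note the wall clock: the soft start ran for 6m4s before giving up, which lines up with the StartHostTimeout:6m0s recorded in the cluster config echoed in the stderr trace below. minikube apparently waited the full component health-check window for an apiserver that never came up (exit status 80 is minikube's guest-environment error class, if the usual exit-code mapping applies). For local reproduction the window can be shortened to fail fast; a sketch in which the flag value is illustrative rather than taken from the log:

	# Re-run the soft start with a shorter health-check window while debugging.
	out/minikube-linux-amd64 start -p functional-744288 --alsologtostderr -v=8 --wait-timeout=2m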
** stderr ** 
	I1014 19:40:29.999204  437269 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:40:29.999451  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999459  437269 out.go:374] Setting ErrFile to fd 2...
	I1014 19:40:29.999463  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999664  437269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:40:30.000162  437269 out.go:368] Setting JSON to false
	I1014 19:40:30.001140  437269 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8576,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:40:30.001253  437269 start.go:141] virtualization: kvm guest
	I1014 19:40:30.003929  437269 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:40:30.005394  437269 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:40:30.005413  437269 notify.go:220] Checking for updates...
	I1014 19:40:30.008578  437269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:40:30.009922  437269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:30.011325  437269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:40:30.012721  437269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:40:30.014074  437269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:40:30.015738  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:30.015851  437269 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:40:30.041344  437269 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:40:30.041571  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.106855  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.095983875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.106976  437269 docker.go:318] overlay module found
	I1014 19:40:30.108953  437269 out.go:179] * Using the docker driver based on existing profile
	I1014 19:40:30.110337  437269 start.go:305] selected driver: docker
	I1014 19:40:30.110363  437269 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.110446  437269 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:40:30.110529  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.176521  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.165510899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.177154  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:30.177215  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:30.177273  437269 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.179329  437269 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:40:30.180795  437269 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:40:30.182356  437269 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:40:30.183701  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:30.183742  437269 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:40:30.183752  437269 cache.go:58] Caching tarball of preloaded images
	I1014 19:40:30.183799  437269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:40:30.183863  437269 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:40:30.183877  437269 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:40:30.183979  437269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:40:30.204077  437269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:40:30.204098  437269 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:40:30.204114  437269 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:40:30.204155  437269 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:40:30.204220  437269 start.go:364] duration metric: took 47.096µs to acquireMachinesLock for "functional-744288"
	I1014 19:40:30.204240  437269 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:40:30.204245  437269 fix.go:54] fixHost starting: 
	I1014 19:40:30.204447  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:30.222380  437269 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:40:30.222430  437269 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:40:30.224794  437269 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:40:30.224832  437269 machine.go:93] provisionDockerMachine start ...
	I1014 19:40:30.224915  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.243631  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.243897  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.243914  437269 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:40:30.392088  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.392121  437269 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:40:30.392200  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.410333  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.410549  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.410563  437269 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:40:30.567306  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.567398  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.585534  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.585774  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.585794  437269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:40:30.733740  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:40:30.733790  437269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:40:30.733813  437269 ubuntu.go:190] setting up certificates
	I1014 19:40:30.733825  437269 provision.go:84] configureAuth start
	I1014 19:40:30.733878  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:30.751946  437269 provision.go:143] copyHostCerts
	I1014 19:40:30.751989  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752023  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:40:30.752048  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752133  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:40:30.752237  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752267  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:40:30.752278  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752320  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:40:30.752387  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752412  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:40:30.752422  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752463  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:40:30.752709  437269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:40:31.076864  437269 provision.go:177] copyRemoteCerts
	I1014 19:40:31.076930  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:40:31.076971  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.095322  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.200396  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 19:40:31.200473  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:40:31.218084  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 19:40:31.218140  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:40:31.235905  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 19:40:31.235974  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:40:31.253074  437269 provision.go:87] duration metric: took 519.232689ms to configureAuth
	I1014 19:40:31.253110  437269 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:40:31.253264  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:31.253357  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.271451  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:31.271661  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:31.271677  437269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:40:31.540521  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:40:31.540549  437269 machine.go:96] duration metric: took 1.315709373s to provisionDockerMachine
	I1014 19:40:31.540561  437269 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:40:31.540571  437269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:40:31.540628  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:40:31.540669  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.559297  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.665251  437269 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:40:31.669234  437269 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1014 19:40:31.669258  437269 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1014 19:40:31.669267  437269 command_runner.go:130] > VERSION_ID="12"
	I1014 19:40:31.669270  437269 command_runner.go:130] > VERSION="12 (bookworm)"
	I1014 19:40:31.669276  437269 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1014 19:40:31.669279  437269 command_runner.go:130] > ID=debian
	I1014 19:40:31.669283  437269 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1014 19:40:31.669288  437269 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1014 19:40:31.669293  437269 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1014 19:40:31.669341  437269 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:40:31.669359  437269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:40:31.669371  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:40:31.669425  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:40:31.669510  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:40:31.669525  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 19:40:31.669592  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:40:31.669600  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> /etc/test/nested/copy/417373/hosts
	I1014 19:40:31.669633  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:40:31.677988  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:31.696543  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:40:31.715275  437269 start.go:296] duration metric: took 174.687158ms for postStartSetup
	I1014 19:40:31.715383  437269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:40:31.715428  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.734376  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.836456  437269 command_runner.go:130] > 39%
	I1014 19:40:31.836544  437269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:40:31.841513  437269 command_runner.go:130] > 178G
	I1014 19:40:31.841552  437269 fix.go:56] duration metric: took 1.637302821s for fixHost
	I1014 19:40:31.841566  437269 start.go:83] releasing machines lock for "functional-744288", held for 1.637335022s
	I1014 19:40:31.841633  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:31.859002  437269 ssh_runner.go:195] Run: cat /version.json
	I1014 19:40:31.859036  437269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:40:31.859053  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.859093  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.877314  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.877547  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.978415  437269 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1014 19:40:31.978583  437269 ssh_runner.go:195] Run: systemctl --version
	I1014 19:40:32.030433  437269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 19:40:32.032548  437269 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1014 19:40:32.032581  437269 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1014 19:40:32.032653  437269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:40:32.071124  437269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 19:40:32.075797  437269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 19:40:32.076143  437269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:40:32.076213  437269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:40:32.084774  437269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:40:32.084802  437269 start.go:495] detecting cgroup driver to use...
	I1014 19:40:32.084841  437269 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:40:32.084885  437269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:40:32.100807  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:40:32.114918  437269 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:40:32.115001  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:40:32.131082  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:40:32.145731  437269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:40:32.234963  437269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:40:32.329593  437269 docker.go:234] disabling docker service ...
	I1014 19:40:32.329671  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:40:32.344729  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:40:32.357712  437269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:40:32.445038  437269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:40:32.534134  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:40:32.547615  437269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:40:32.562780  437269 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 19:40:32.562835  437269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:40:32.562884  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.572580  437269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:40:32.572655  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.581715  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.590624  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.599492  437269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:40:32.607979  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.617026  437269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.625607  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.634661  437269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:40:32.642022  437269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 19:40:32.642101  437269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:40:32.649948  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:32.737827  437269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:40:32.854779  437269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:40:32.854851  437269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:40:32.859353  437269 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 19:40:32.859376  437269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 19:40:32.859382  437269 command_runner.go:130] > Device: 0,59	Inode: 3887        Links: 1
	I1014 19:40:32.859389  437269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:32.859394  437269 command_runner.go:130] > Access: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859399  437269 command_runner.go:130] > Modify: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859403  437269 command_runner.go:130] > Change: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859408  437269 command_runner.go:130] >  Birth: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859438  437269 start.go:563] Will wait 60s for crictl version
	I1014 19:40:32.859485  437269 ssh_runner.go:195] Run: which crictl
	I1014 19:40:32.863222  437269 command_runner.go:130] > /usr/local/bin/crictl
	I1014 19:40:32.863312  437269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:40:32.889462  437269 command_runner.go:130] > Version:  0.1.0
	I1014 19:40:32.889482  437269 command_runner.go:130] > RuntimeName:  cri-o
	I1014 19:40:32.889486  437269 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1014 19:40:32.889490  437269 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 19:40:32.889505  437269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:40:32.889559  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.920224  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.920251  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.920258  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.920266  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.920279  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.920285  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.920291  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.920303  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.920312  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.920322  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.920332  437269 command_runner.go:130] >      static
	I1014 19:40:32.920340  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.920347  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.920354  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.920358  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.920361  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.920367  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.920371  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.920379  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.920383  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.920453  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.949467  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.949490  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.949495  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.949499  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.949504  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.949508  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.949514  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.949525  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.949534  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.949540  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.949546  437269 command_runner.go:130] >      static
	I1014 19:40:32.949555  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.949560  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.949567  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.949571  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.949576  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.949582  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.949588  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.949592  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.949599  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.952722  437269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:40:32.953989  437269 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:40:32.971672  437269 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:40:32.976098  437269 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1014 19:40:32.976178  437269 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:40:32.976267  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:32.976332  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.006155  437269 command_runner.go:130] > {
	I1014 19:40:33.006181  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.006186  437269 command_runner.go:130] >     {
	I1014 19:40:33.006194  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.006200  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006209  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.006213  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006218  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006232  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.006248  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.006257  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006270  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.006276  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006281  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006287  437269 command_runner.go:130] >     },
	I1014 19:40:33.006290  437269 command_runner.go:130] >     {
	I1014 19:40:33.006304  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.006316  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006324  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.006330  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006335  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006348  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.006364  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.006372  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006379  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.006388  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006398  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006402  437269 command_runner.go:130] >     },
	I1014 19:40:33.006405  437269 command_runner.go:130] >     {
	I1014 19:40:33.006413  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.006422  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006431  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.006441  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006448  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006463  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.006477  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.006486  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006496  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.006505  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.006513  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006516  437269 command_runner.go:130] >     },
	I1014 19:40:33.006525  437269 command_runner.go:130] >     {
	I1014 19:40:33.006535  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.006545  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006555  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.006563  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006570  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006584  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.006598  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.006607  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006615  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.006619  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006624  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006632  437269 command_runner.go:130] >       },
	I1014 19:40:33.006646  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006657  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006667  437269 command_runner.go:130] >     },
	I1014 19:40:33.006675  437269 command_runner.go:130] >     {
	I1014 19:40:33.006689  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.006695  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006707  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.006714  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006718  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006732  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.006748  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.006767  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006778  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.006786  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006795  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006803  437269 command_runner.go:130] >       },
	I1014 19:40:33.006809  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006819  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006827  437269 command_runner.go:130] >     },
	I1014 19:40:33.006835  437269 command_runner.go:130] >     {
	I1014 19:40:33.006846  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.006855  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006865  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.006874  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006884  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006899  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.006910  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.006918  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006926  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.006935  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006948  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006957  437269 command_runner.go:130] >       },
	I1014 19:40:33.006967  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006976  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006985  437269 command_runner.go:130] >     },
	I1014 19:40:33.006993  437269 command_runner.go:130] >     {
	I1014 19:40:33.007004  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.007011  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007019  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.007027  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007037  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007052  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.007067  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.007076  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007084  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.007092  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007095  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007103  437269 command_runner.go:130] >     },
	I1014 19:40:33.007109  437269 command_runner.go:130] >     {
	I1014 19:40:33.007123  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.007132  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007142  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.007152  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007162  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007175  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.007194  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.007203  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007213  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.007220  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007229  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.007237  437269 command_runner.go:130] >       },
	I1014 19:40:33.007246  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007253  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007260  437269 command_runner.go:130] >     },
	I1014 19:40:33.007266  437269 command_runner.go:130] >     {
	I1014 19:40:33.007278  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.007285  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007290  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.007298  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007308  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007320  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.007334  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.007342  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007351  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.007359  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007370  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.007376  437269 command_runner.go:130] >       },
	I1014 19:40:33.007380  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007387  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.007393  437269 command_runner.go:130] >     }
	I1014 19:40:33.007401  437269 command_runner.go:130] >   ]
	I1014 19:40:33.007406  437269 command_runner.go:130] > }
	I1014 19:40:33.007590  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.007603  437269 crio.go:433] Images already preloaded, skipping extraction
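	(Editor's note: the "all images are preloaded" decision above is made by shelling out to `sudo crictl images --output json` and comparing the decoded image list against the tags expected for this Kubernetes version. A minimal Go sketch of that check follows; the struct fields mirror the JSON captured above, but the helper name and the comparison logic are illustrative assumptions, not minikube's actual implementation.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImages mirrors the fields visible in the crictl JSON above.
	// Note "size" is a quoted string in the output, not a number.
	type crictlImages struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	// hasAllImages reports whether every wanted tag is already in the
	// runtime's image store. Purely illustrative.
	func hasAllImages(want []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, fmt.Errorf("missing %s", tag)
			}
		}
		return true, nil
	}

	func main() {
		ok, err := hasAllImages([]string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"registry.k8s.io/pause:3.10.1",
		})
		fmt.Println(ok, err)
	}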
	I1014 19:40:33.007661  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.032442  437269 command_runner.go:130] > {
	I1014 19:40:33.032462  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.032466  437269 command_runner.go:130] >     {
	I1014 19:40:33.032478  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.032485  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032495  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.032501  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032508  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032519  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.032527  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.032534  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032538  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.032542  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032548  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032551  437269 command_runner.go:130] >     },
	I1014 19:40:33.032555  437269 command_runner.go:130] >     {
	I1014 19:40:33.032561  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.032567  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032572  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.032575  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032582  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032591  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.032602  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.032608  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032612  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.032616  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032621  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032626  437269 command_runner.go:130] >     },
	I1014 19:40:33.032629  437269 command_runner.go:130] >     {
	I1014 19:40:33.032635  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.032642  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032647  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.032652  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032656  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032665  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.032675  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.032682  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032686  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.032690  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.032694  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032697  437269 command_runner.go:130] >     },
	I1014 19:40:33.032700  437269 command_runner.go:130] >     {
	I1014 19:40:33.032705  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.032709  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032714  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.032720  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032724  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032730  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.032739  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.032743  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032749  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.032772  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032781  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032786  437269 command_runner.go:130] >       },
	I1014 19:40:33.032793  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032798  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032801  437269 command_runner.go:130] >     },
	I1014 19:40:33.032804  437269 command_runner.go:130] >     {
	I1014 19:40:33.032810  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.032816  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032821  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.032827  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032830  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032837  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.032847  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.032850  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032858  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.032862  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032866  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032869  437269 command_runner.go:130] >       },
	I1014 19:40:33.032873  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032877  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032880  437269 command_runner.go:130] >     },
	I1014 19:40:33.032883  437269 command_runner.go:130] >     {
	I1014 19:40:33.032889  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.032895  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032901  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.032906  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032910  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032917  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.032935  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.032940  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032944  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.032948  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032955  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032958  437269 command_runner.go:130] >       },
	I1014 19:40:33.032963  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032969  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032973  437269 command_runner.go:130] >     },
	I1014 19:40:33.032976  437269 command_runner.go:130] >     {
	I1014 19:40:33.032981  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.032986  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032990  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.032996  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033000  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033009  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.033018  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.033023  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033027  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.033033  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033037  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033042  437269 command_runner.go:130] >     },
	I1014 19:40:33.033045  437269 command_runner.go:130] >     {
	I1014 19:40:33.033051  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.033055  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033059  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.033062  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033066  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033073  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.033115  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.033125  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033129  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.033133  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033139  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.033142  437269 command_runner.go:130] >       },
	I1014 19:40:33.033146  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033150  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033153  437269 command_runner.go:130] >     },
	I1014 19:40:33.033157  437269 command_runner.go:130] >     {
	I1014 19:40:33.033166  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.033170  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033175  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.033180  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033184  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033194  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.033201  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.033207  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033210  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.033214  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033217  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.033221  437269 command_runner.go:130] >       },
	I1014 19:40:33.033227  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033231  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.033234  437269 command_runner.go:130] >     }
	I1014 19:40:33.033237  437269 command_runner.go:130] >   ]
	I1014 19:40:33.033243  437269 command_runner.go:130] > }
	I1014 19:40:33.033339  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.033350  437269 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:40:33.033357  437269 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:40:33.033466  437269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
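	(Editor's note: in the kubelet drop-in above, the empty `ExecStart=` line is standard systemd override semantics: it clears the packaged unit's command so the following `ExecStart=` replaces it rather than appending a second one. The kubeadm.go:946 log line is this unit rendered from the node's parameters (binary path for the Kubernetes version, hostname-override, node-ip). A hedged Go text/template sketch of that rendering follows; the type and field names are hypothetical, and the flag list is abbreviated from the real one above.)

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical parameters; minikube derives these from the cluster config.
	type nodeParams struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// The empty ExecStart= clears the stock unit's command first, which is
	// how a systemd drop-in replaces (rather than appends to) ExecStart.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		_ = t.Execute(os.Stdout, nodeParams{
			KubernetesVersion: "v1.34.1",
			Hostname:          "functional-744288",
			NodeIP:            "192.168.49.2",
		})
	}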
	I1014 19:40:33.033525  437269 ssh_runner.go:195] Run: crio config
	I1014 19:40:33.060289  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059904069Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1014 19:40:33.060322  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059934761Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1014 19:40:33.060333  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.05995717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1014 19:40:33.060344  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059977069Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1014 19:40:33.060356  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060036887Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:33.060415  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060204237Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1014 19:40:33.072518  437269 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
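	(Editor's note: the stderr lines above show CRI-O's config load order: the base file /etc/crio/crio.conf is skipped because it does not exist, then drop-ins under /etc/crio/crio.conf.d are applied in lexical order, so 02-crio.conf loads before 10-crio.conf and later files win on conflicting keys. The dump that follows is the merged result printed by `crio config`. A small Go sketch of just the ordering step is below; it only lists files the way the log applies them and is an assumption about the behavior, not CRI-O's source.)

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"sort"
	)

	// dropInOrder lists *.conf drop-ins lexically sorted, matching the
	// 02-crio.conf -> 10-crio.conf application order in the log above.
	func dropInOrder(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var files []string
		for _, e := range entries {
			if !e.IsDir() && filepath.Ext(e.Name()) == ".conf" {
				files = append(files, filepath.Join(dir, e.Name()))
			}
		}
		sort.Strings(files)
		return files, nil
	}

	func main() {
		files, err := dropInOrder("/etc/crio/crio.conf.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, f := range files {
			fmt.Println(f)
		}
	}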
	I1014 19:40:33.078451  437269 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 19:40:33.078471  437269 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 19:40:33.078478  437269 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 19:40:33.078485  437269 command_runner.go:130] > #
	I1014 19:40:33.078491  437269 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 19:40:33.078497  437269 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 19:40:33.078504  437269 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 19:40:33.078513  437269 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 19:40:33.078518  437269 command_runner.go:130] > # reload'.
	I1014 19:40:33.078524  437269 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 19:40:33.078533  437269 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 19:40:33.078539  437269 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 19:40:33.078545  437269 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 19:40:33.078551  437269 command_runner.go:130] > [crio]
	I1014 19:40:33.078557  437269 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 19:40:33.078564  437269 command_runner.go:130] > # containers images, in this directory.
	I1014 19:40:33.078572  437269 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1014 19:40:33.078580  437269 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 19:40:33.078585  437269 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1014 19:40:33.078594  437269 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1014 19:40:33.078601  437269 command_runner.go:130] > # imagestore = ""
	I1014 19:40:33.078607  437269 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 19:40:33.078615  437269 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 19:40:33.078620  437269 command_runner.go:130] > # storage_driver = "overlay"
	I1014 19:40:33.078625  437269 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 19:40:33.078633  437269 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 19:40:33.078637  437269 command_runner.go:130] > # storage_option = [
	I1014 19:40:33.078642  437269 command_runner.go:130] > # ]
	I1014 19:40:33.078648  437269 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 19:40:33.078656  437269 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 19:40:33.078660  437269 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 19:40:33.078667  437269 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 19:40:33.078673  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 19:40:33.078690  437269 command_runner.go:130] > # always happen on a node reboot
	I1014 19:40:33.078695  437269 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 19:40:33.078703  437269 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 19:40:33.078709  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 19:40:33.078716  437269 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 19:40:33.078720  437269 command_runner.go:130] > # version_file_persist = ""
	I1014 19:40:33.078729  437269 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 19:40:33.078739  437269 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 19:40:33.078745  437269 command_runner.go:130] > # internal_wipe = true
	I1014 19:40:33.078771  437269 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 19:40:33.078784  437269 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 19:40:33.078790  437269 command_runner.go:130] > # internal_repair = true
	I1014 19:40:33.078798  437269 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 19:40:33.078804  437269 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 19:40:33.078816  437269 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 19:40:33.078823  437269 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 19:40:33.078829  437269 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 19:40:33.078834  437269 command_runner.go:130] > [crio.api]
	I1014 19:40:33.078839  437269 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 19:40:33.078846  437269 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 19:40:33.078851  437269 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 19:40:33.078858  437269 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 19:40:33.078864  437269 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 19:40:33.078871  437269 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 19:40:33.078875  437269 command_runner.go:130] > # stream_port = "0"
	I1014 19:40:33.078881  437269 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 19:40:33.078885  437269 command_runner.go:130] > # stream_enable_tls = false
	I1014 19:40:33.078893  437269 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 19:40:33.078897  437269 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 19:40:33.078904  437269 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 19:40:33.078912  437269 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078916  437269 command_runner.go:130] > # stream_tls_cert = ""
	I1014 19:40:33.078924  437269 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 19:40:33.078931  437269 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078936  437269 command_runner.go:130] > # stream_tls_key = ""
	I1014 19:40:33.078941  437269 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 19:40:33.078949  437269 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 19:40:33.078954  437269 command_runner.go:130] > # automatically pick up the changes.
	I1014 19:40:33.078960  437269 command_runner.go:130] > # stream_tls_ca = ""
	I1014 19:40:33.078977  437269 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078984  437269 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1014 19:40:33.078991  437269 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078998  437269 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1014 19:40:33.079004  437269 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 19:40:33.079011  437269 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 19:40:33.079015  437269 command_runner.go:130] > [crio.runtime]
	I1014 19:40:33.079021  437269 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 19:40:33.079028  437269 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 19:40:33.079032  437269 command_runner.go:130] > # "nofile=1024:2048"
	I1014 19:40:33.079040  437269 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 19:40:33.079046  437269 command_runner.go:130] > # default_ulimits = [
	I1014 19:40:33.079049  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079054  437269 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 19:40:33.079060  437269 command_runner.go:130] > # no_pivot = false
	I1014 19:40:33.079065  437269 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 19:40:33.079073  437269 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 19:40:33.079078  437269 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 19:40:33.079086  437269 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 19:40:33.079090  437269 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 19:40:33.079099  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079105  437269 command_runner.go:130] > # conmon = ""
	I1014 19:40:33.079109  437269 command_runner.go:130] > # Cgroup setting for conmon
	I1014 19:40:33.079117  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 19:40:33.079123  437269 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 19:40:33.079129  437269 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 19:40:33.079136  437269 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 19:40:33.079142  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079147  437269 command_runner.go:130] > # conmon_env = [
	I1014 19:40:33.079150  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079155  437269 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 19:40:33.079163  437269 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 19:40:33.079169  437269 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 19:40:33.079175  437269 command_runner.go:130] > # default_env = [
	I1014 19:40:33.079177  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079183  437269 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 19:40:33.079192  437269 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1014 19:40:33.079198  437269 command_runner.go:130] > # selinux = false
	I1014 19:40:33.079204  437269 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 19:40:33.079210  437269 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1014 19:40:33.079219  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079225  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.079231  437269 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1014 19:40:33.079237  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079242  437269 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1014 19:40:33.079250  437269 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 19:40:33.079258  437269 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 19:40:33.079264  437269 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 19:40:33.079273  437269 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 19:40:33.079279  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079284  437269 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 19:40:33.079291  437269 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 19:40:33.079295  437269 command_runner.go:130] > # the cgroup blockio controller.
	I1014 19:40:33.079301  437269 command_runner.go:130] > # blockio_config_file = ""
	I1014 19:40:33.079308  437269 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 19:40:33.079314  437269 command_runner.go:130] > # blockio parameters.
	I1014 19:40:33.079317  437269 command_runner.go:130] > # blockio_reload = false
	I1014 19:40:33.079325  437269 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 19:40:33.079329  437269 command_runner.go:130] > # irqbalance daemon.
	I1014 19:40:33.079336  437269 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 19:40:33.079342  437269 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 19:40:33.079351  437269 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 19:40:33.079360  437269 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 19:40:33.079367  437269 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 19:40:33.079374  437269 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 19:40:33.079380  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079385  437269 command_runner.go:130] > # rdt_config_file = ""
	I1014 19:40:33.079393  437269 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 19:40:33.079396  437269 command_runner.go:130] > # cgroup_manager = "systemd"
	I1014 19:40:33.079402  437269 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 19:40:33.079407  437269 command_runner.go:130] > # separate_pull_cgroup = ""
	I1014 19:40:33.079413  437269 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 19:40:33.079421  437269 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 19:40:33.079427  437269 command_runner.go:130] > # will be added.
	I1014 19:40:33.079430  437269 command_runner.go:130] > # default_capabilities = [
	I1014 19:40:33.079433  437269 command_runner.go:130] > # 	"CHOWN",
	I1014 19:40:33.079439  437269 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 19:40:33.079442  437269 command_runner.go:130] > # 	"FSETID",
	I1014 19:40:33.079445  437269 command_runner.go:130] > # 	"FOWNER",
	I1014 19:40:33.079451  437269 command_runner.go:130] > # 	"SETGID",
	I1014 19:40:33.079466  437269 command_runner.go:130] > # 	"SETUID",
	I1014 19:40:33.079472  437269 command_runner.go:130] > # 	"SETPCAP",
	I1014 19:40:33.079475  437269 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 19:40:33.079480  437269 command_runner.go:130] > # 	"KILL",
	I1014 19:40:33.079484  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079493  437269 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 19:40:33.079501  437269 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 19:40:33.079508  437269 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 19:40:33.079514  437269 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 19:40:33.079522  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079526  437269 command_runner.go:130] > default_sysctls = [
	I1014 19:40:33.079530  437269 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 19:40:33.079536  437269 command_runner.go:130] > ]
	I1014 19:40:33.079540  437269 command_runner.go:130] > # List of devices on the host that a
	I1014 19:40:33.079548  437269 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 19:40:33.079553  437269 command_runner.go:130] > # allowed_devices = [
	I1014 19:40:33.079557  437269 command_runner.go:130] > # 	"/dev/fuse",
	I1014 19:40:33.079563  437269 command_runner.go:130] > # 	"/dev/net/tun",
	I1014 19:40:33.079566  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079574  437269 command_runner.go:130] > # List of additional devices. specified as
	I1014 19:40:33.079581  437269 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 19:40:33.079588  437269 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 19:40:33.079595  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079601  437269 command_runner.go:130] > # additional_devices = [
	I1014 19:40:33.079604  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079611  437269 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 19:40:33.079615  437269 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 19:40:33.079619  437269 command_runner.go:130] > # 	"/etc/cdi",
	I1014 19:40:33.079625  437269 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 19:40:33.079628  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079633  437269 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 19:40:33.079641  437269 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 19:40:33.079645  437269 command_runner.go:130] > # Defaults to false.
	I1014 19:40:33.079652  437269 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 19:40:33.079659  437269 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 19:40:33.079666  437269 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 19:40:33.079670  437269 command_runner.go:130] > # hooks_dir = [
	I1014 19:40:33.079682  437269 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 19:40:33.079687  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079693  437269 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 19:40:33.079701  437269 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 19:40:33.079706  437269 command_runner.go:130] > # its default mounts from the following two files:
	I1014 19:40:33.079712  437269 command_runner.go:130] > #
	I1014 19:40:33.079718  437269 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 19:40:33.079726  437269 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 19:40:33.079734  437269 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 19:40:33.079737  437269 command_runner.go:130] > #
	I1014 19:40:33.079743  437269 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 19:40:33.079751  437269 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 19:40:33.079780  437269 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 19:40:33.079788  437269 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 19:40:33.079791  437269 command_runner.go:130] > #
	I1014 19:40:33.079797  437269 command_runner.go:130] > # default_mounts_file = ""
	I1014 19:40:33.079804  437269 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 19:40:33.079811  437269 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 19:40:33.079816  437269 command_runner.go:130] > # pids_limit = -1
	I1014 19:40:33.079822  437269 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1014 19:40:33.079830  437269 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 19:40:33.079839  437269 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 19:40:33.079846  437269 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 19:40:33.079852  437269 command_runner.go:130] > # log_size_max = -1
	I1014 19:40:33.079858  437269 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 19:40:33.079864  437269 command_runner.go:130] > # log_to_journald = false
	I1014 19:40:33.079870  437269 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 19:40:33.079878  437269 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 19:40:33.079883  437269 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 19:40:33.079890  437269 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 19:40:33.079895  437269 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 19:40:33.079901  437269 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 19:40:33.079906  437269 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 19:40:33.079912  437269 command_runner.go:130] > # read_only = false
	I1014 19:40:33.079917  437269 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 19:40:33.079926  437269 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 19:40:33.079933  437269 command_runner.go:130] > # live configuration reload.
	I1014 19:40:33.079937  437269 command_runner.go:130] > # log_level = "info"
	I1014 19:40:33.079942  437269 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 19:40:33.079950  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079953  437269 command_runner.go:130] > # log_filter = ""
	I1014 19:40:33.079959  437269 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079967  437269 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 19:40:33.079970  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.079978  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.079983  437269 command_runner.go:130] > # uid_mappings = ""
	I1014 19:40:33.079989  437269 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079997  437269 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 19:40:33.080005  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.080014  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080020  437269 command_runner.go:130] > # gid_mappings = ""
	I1014 19:40:33.080026  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 19:40:33.080035  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080043  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080049  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080055  437269 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 19:40:33.080061  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 19:40:33.080069  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080075  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080085  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080090  437269 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 19:40:33.080096  437269 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 19:40:33.080112  437269 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 19:40:33.080120  437269 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 19:40:33.080124  437269 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 19:40:33.080131  437269 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 19:40:33.080138  437269 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 19:40:33.080144  437269 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 19:40:33.080149  437269 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 19:40:33.080155  437269 command_runner.go:130] > # drop_infra_ctr = true
	I1014 19:40:33.080160  437269 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 19:40:33.080168  437269 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 19:40:33.080175  437269 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 19:40:33.080181  437269 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 19:40:33.080188  437269 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 19:40:33.080195  437269 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 19:40:33.080200  437269 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 19:40:33.080207  437269 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 19:40:33.080211  437269 command_runner.go:130] > # shared_cpuset = ""
	I1014 19:40:33.080219  437269 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 19:40:33.080223  437269 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 19:40:33.080230  437269 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 19:40:33.080237  437269 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 19:40:33.080243  437269 command_runner.go:130] > # pinns_path = ""
	I1014 19:40:33.080249  437269 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 19:40:33.080256  437269 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1014 19:40:33.080261  437269 command_runner.go:130] > # enable_criu_support = true
	I1014 19:40:33.080268  437269 command_runner.go:130] > # Enable/disable the generation of the container,
	I1014 19:40:33.080273  437269 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 19:40:33.080280  437269 command_runner.go:130] > # enable_pod_events = false
	I1014 19:40:33.080285  437269 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 19:40:33.080292  437269 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 19:40:33.080296  437269 command_runner.go:130] > # default_runtime = "crun"
	I1014 19:40:33.080301  437269 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 19:40:33.080310  437269 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1014 19:40:33.080320  437269 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 19:40:33.080325  437269 command_runner.go:130] > # creation as a file is not desired either.
	I1014 19:40:33.080336  437269 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 19:40:33.080342  437269 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 19:40:33.080346  437269 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 19:40:33.080352  437269 command_runner.go:130] > # ]
	I1014 19:40:33.080357  437269 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 19:40:33.080365  437269 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 19:40:33.080373  437269 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 19:40:33.080378  437269 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 19:40:33.080382  437269 command_runner.go:130] > #
	I1014 19:40:33.080387  437269 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 19:40:33.080394  437269 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 19:40:33.080397  437269 command_runner.go:130] > # runtime_type = "oci"
	I1014 19:40:33.080404  437269 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 19:40:33.080408  437269 command_runner.go:130] > # inherit_default_runtime = false
	I1014 19:40:33.080413  437269 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 19:40:33.080419  437269 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 19:40:33.080424  437269 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 19:40:33.080430  437269 command_runner.go:130] > # monitor_env = []
	I1014 19:40:33.080435  437269 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 19:40:33.080440  437269 command_runner.go:130] > # allowed_annotations = []
	I1014 19:40:33.080445  437269 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 19:40:33.080451  437269 command_runner.go:130] > # no_sync_log = false
	I1014 19:40:33.080455  437269 command_runner.go:130] > # default_annotations = {}
	I1014 19:40:33.080461  437269 command_runner.go:130] > # stream_websockets = false
	I1014 19:40:33.080465  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.080487  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.080494  437269 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 19:40:33.080500  437269 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 19:40:33.080508  437269 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 19:40:33.080514  437269 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 19:40:33.080519  437269 command_runner.go:130] > #   in $PATH.
	I1014 19:40:33.080525  437269 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 19:40:33.080532  437269 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 19:40:33.080538  437269 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 19:40:33.080543  437269 command_runner.go:130] > #   state.
	I1014 19:40:33.080552  437269 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 19:40:33.080560  437269 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1014 19:40:33.080565  437269 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1014 19:40:33.080573  437269 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1014 19:40:33.080578  437269 command_runner.go:130] > #   the values from the default runtime on load time.
	I1014 19:40:33.080586  437269 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 19:40:33.080591  437269 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 19:40:33.080599  437269 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 19:40:33.080605  437269 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 19:40:33.080612  437269 command_runner.go:130] > #   The currently recognized values are:
	I1014 19:40:33.080618  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 19:40:33.080627  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 19:40:33.080636  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 19:40:33.080641  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 19:40:33.080651  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 19:40:33.080660  437269 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 19:40:33.080669  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 19:40:33.080680  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 19:40:33.080687  437269 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 19:40:33.080693  437269 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1014 19:40:33.080702  437269 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1014 19:40:33.080710  437269 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1014 19:40:33.080715  437269 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1014 19:40:33.080724  437269 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1014 19:40:33.080732  437269 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1014 19:40:33.080738  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1014 19:40:33.080747  437269 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 19:40:33.080751  437269 command_runner.go:130] > #   deprecated option "conmon".
	I1014 19:40:33.080773  437269 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 19:40:33.080783  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 19:40:33.080796  437269 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 19:40:33.080803  437269 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 19:40:33.080810  437269 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1014 19:40:33.080817  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 19:40:33.080824  437269 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1014 19:40:33.080830  437269 command_runner.go:130] > #   conmon-rs by using:
	I1014 19:40:33.080837  437269 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1014 19:40:33.080847  437269 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1014 19:40:33.080857  437269 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1014 19:40:33.080865  437269 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 19:40:33.080872  437269 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 19:40:33.080879  437269 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1014 19:40:33.080888  437269 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1014 19:40:33.080894  437269 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1014 19:40:33.080904  437269 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1014 19:40:33.080915  437269 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1014 19:40:33.080921  437269 command_runner.go:130] > #   when a machine crash happens.
	I1014 19:40:33.080929  437269 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1014 19:40:33.080939  437269 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1014 19:40:33.080949  437269 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1014 19:40:33.080955  437269 command_runner.go:130] > #   seccomp profile for the runtime.
	I1014 19:40:33.080961  437269 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1014 19:40:33.080970  437269 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
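For illustration, a minimal pod sketch using two of the allowed_annotations listed above (the pod name and values are hypothetical; the annotations are only honored if the selected runtime handler includes them in its allowed_annotations):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: shm-demo                             # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.ShmSize: "128Mi"     # size of /dev/shm for the pod
	    io.kubernetes.cri-o.umask: "0022"        # umask for the container init process
	spec:
	  containers:
	    - name: app
	      image: registry.k8s.io/pause:3.10.1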
	I1014 19:40:33.080975  437269 command_runner.go:130] > #
	I1014 19:40:33.080980  437269 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 19:40:33.080985  437269 command_runner.go:130] > #
	I1014 19:40:33.080991  437269 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 19:40:33.080998  437269 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1014 19:40:33.081002  437269 command_runner.go:130] > #
	I1014 19:40:33.081007  437269 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 19:40:33.081015  437269 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 19:40:33.081020  437269 command_runner.go:130] > #
	I1014 19:40:33.081026  437269 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 19:40:33.081032  437269 command_runner.go:130] > # feature.
	I1014 19:40:33.081035  437269 command_runner.go:130] > #
	I1014 19:40:33.081042  437269 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1014 19:40:33.081048  437269 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 19:40:33.081057  437269 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 19:40:33.081062  437269 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 19:40:33.081070  437269 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction" is "stop".
	I1014 19:40:33.081073  437269 command_runner.go:130] > #
	I1014 19:40:33.081079  437269 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 19:40:33.081087  437269 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 19:40:33.081090  437269 command_runner.go:130] > #
	I1014 19:40:33.081096  437269 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1014 19:40:33.081103  437269 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 19:40:33.081106  437269 command_runner.go:130] > #
	I1014 19:40:33.081112  437269 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 19:40:33.081119  437269 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 19:40:33.081122  437269 command_runner.go:130] > # limitation.
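A minimal sketch of a pod opting into the seccomp notifier described above (the pod name is hypothetical; this assumes the selected runtime handler allows the "io.kubernetes.cri-o.seccompNotifierAction" annotation, and note the "Never" restart policy):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: notifier-demo                                  # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate the workload on blocked syscalls
	spec:
	  restartPolicy: Never    # otherwise the kubelet restarts the container immediately
	  containers:
	    - name: app
	      image: registry.k8s.io/pause:3.10.1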
	I1014 19:40:33.081129  437269 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1014 19:40:33.081138  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1014 19:40:33.081143  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081147  437269 command_runner.go:130] > runtime_root = "/run/crun"
	I1014 19:40:33.081151  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081157  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081161  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081167  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081171  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081177  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081181  437269 command_runner.go:130] > allowed_annotations = [
	I1014 19:40:33.081187  437269 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1014 19:40:33.081190  437269 command_runner.go:130] > ]
	I1014 19:40:33.081197  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081201  437269 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 19:40:33.081208  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1014 19:40:33.081212  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081218  437269 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 19:40:33.081222  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081229  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081234  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081241  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081245  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081251  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081256  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081264  437269 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 19:40:33.081271  437269 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 19:40:33.081277  437269 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 19:40:33.081286  437269 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1014 19:40:33.081298  437269 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1014 19:40:33.081309  437269 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1014 19:40:33.081318  437269 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1014 19:40:33.081324  437269 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 19:40:33.081335  437269 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 19:40:33.081345  437269 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 19:40:33.081353  437269 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 19:40:33.081359  437269 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 19:40:33.081365  437269 command_runner.go:130] > # Example:
	I1014 19:40:33.081369  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 19:40:33.081375  437269 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 19:40:33.081380  437269 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 19:40:33.081389  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 19:40:33.081395  437269 command_runner.go:130] > # cpuset = "0-1"
	I1014 19:40:33.081399  437269 command_runner.go:130] > # cpushares = "5"
	I1014 19:40:33.081405  437269 command_runner.go:130] > # cpuquota = "1000"
	I1014 19:40:33.081408  437269 command_runner.go:130] > # cpuperiod = "100000"
	I1014 19:40:33.081412  437269 command_runner.go:130] > # cpulimit = "35"
	I1014 19:40:33.081417  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.081421  437269 command_runner.go:130] > # The workload name is workload-type.
	I1014 19:40:33.081430  437269 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 19:40:33.081438  437269 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 19:40:33.081443  437269 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 19:40:33.081453  437269 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 19:40:33.081470  437269 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
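A minimal pod sketch for the workload example above (pod and container names are hypothetical): the activation annotation opts the pod in, and the prefixed annotation overrides cpushares for the container named "app":

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                            # hypothetical name
	  annotations:
	    io.crio.workload: ""                         # activation annotation (key only)
	    io.crio.workload-type.cpushares/app: "200"   # per-container override
	spec:
	  containers:
	    - name: app
	      image: registry.k8s.io/pause:3.10.1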
	I1014 19:40:33.081477  437269 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 19:40:33.081484  437269 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 19:40:33.081490  437269 command_runner.go:130] > # Default value is set to true
	I1014 19:40:33.081494  437269 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 19:40:33.081499  437269 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1014 19:40:33.081505  437269 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1014 19:40:33.081510  437269 command_runner.go:130] > # Default value is set to 'false'
	I1014 19:40:33.081516  437269 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 19:40:33.081522  437269 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1014 19:40:33.081531  437269 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1014 19:40:33.081537  437269 command_runner.go:130] > # timezone = ""
	I1014 19:40:33.081543  437269 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 19:40:33.081549  437269 command_runner.go:130] > #
	I1014 19:40:33.081555  437269 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 19:40:33.081563  437269 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1014 19:40:33.081567  437269 command_runner.go:130] > [crio.image]
	I1014 19:40:33.081575  437269 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 19:40:33.081579  437269 command_runner.go:130] > # default_transport = "docker://"
	I1014 19:40:33.081585  437269 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 19:40:33.081593  437269 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081597  437269 command_runner.go:130] > # global_auth_file = ""
	I1014 19:40:33.081604  437269 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 19:40:33.081609  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081616  437269 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.081622  437269 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 19:40:33.081630  437269 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081634  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081639  437269 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 19:40:33.081645  437269 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 19:40:33.081653  437269 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1014 19:40:33.081658  437269 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1014 19:40:33.081666  437269 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 19:40:33.081671  437269 command_runner.go:130] > # pause_command = "/pause"
	I1014 19:40:33.081682  437269 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 19:40:33.081690  437269 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 19:40:33.081695  437269 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 19:40:33.081703  437269 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 19:40:33.081709  437269 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 19:40:33.081717  437269 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 19:40:33.081723  437269 command_runner.go:130] > # pinned_images = [
	I1014 19:40:33.081725  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081731  437269 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 19:40:33.081739  437269 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 19:40:33.081745  437269 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 19:40:33.081762  437269 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 19:40:33.081774  437269 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 19:40:33.081781  437269 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1014 19:40:33.081789  437269 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 19:40:33.081795  437269 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 19:40:33.081804  437269 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 19:40:33.081813  437269 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1014 19:40:33.081822  437269 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 19:40:33.081833  437269 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1014 19:40:33.081841  437269 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 19:40:33.081847  437269 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 19:40:33.081853  437269 command_runner.go:130] > # changing them here.
	I1014 19:40:33.081859  437269 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1014 19:40:33.081865  437269 command_runner.go:130] > # insecure_registries = [
	I1014 19:40:33.081868  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081877  437269 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 19:40:33.081887  437269 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1014 19:40:33.081893  437269 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 19:40:33.081898  437269 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 19:40:33.081904  437269 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 19:40:33.081910  437269 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1014 19:40:33.081918  437269 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1014 19:40:33.081925  437269 command_runner.go:130] > # auto_reload_registries = false
	I1014 19:40:33.081932  437269 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1014 19:40:33.081940  437269 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1014 19:40:33.081947  437269 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1014 19:40:33.081951  437269 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1014 19:40:33.081958  437269 command_runner.go:130] > # The mode of short name resolution.
	I1014 19:40:33.081966  437269 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1014 19:40:33.081977  437269 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1014 19:40:33.081984  437269 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1014 19:40:33.081989  437269 command_runner.go:130] > # short_name_mode = "enforcing"
	I1014 19:40:33.081997  437269 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1014 19:40:33.082002  437269 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1014 19:40:33.082009  437269 command_runner.go:130] > # oci_artifact_mount_support = true
	I1014 19:40:33.082015  437269 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 19:40:33.082021  437269 command_runner.go:130] > # CNI plugins.
	I1014 19:40:33.082025  437269 command_runner.go:130] > [crio.network]
	I1014 19:40:33.082033  437269 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 19:40:33.082040  437269 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1014 19:40:33.082044  437269 command_runner.go:130] > # cni_default_network = ""
	I1014 19:40:33.082052  437269 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 19:40:33.082056  437269 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 19:40:33.082064  437269 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 19:40:33.082068  437269 command_runner.go:130] > # plugin_dirs = [
	I1014 19:40:33.082071  437269 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 19:40:33.082074  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082078  437269 command_runner.go:130] > # List of included pod metrics.
	I1014 19:40:33.082082  437269 command_runner.go:130] > # included_pod_metrics = [
	I1014 19:40:33.082085  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082089  437269 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1014 19:40:33.082092  437269 command_runner.go:130] > [crio.metrics]
	I1014 19:40:33.082097  437269 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 19:40:33.082100  437269 command_runner.go:130] > # enable_metrics = false
	I1014 19:40:33.082104  437269 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 19:40:33.082108  437269 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 19:40:33.082114  437269 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1014 19:40:33.082119  437269 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 19:40:33.082124  437269 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 19:40:33.082128  437269 command_runner.go:130] > # metrics_collectors = [
	I1014 19:40:33.082131  437269 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 19:40:33.082135  437269 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 19:40:33.082139  437269 command_runner.go:130] > # 	"containers_oom_total",
	I1014 19:40:33.082142  437269 command_runner.go:130] > # 	"processes_defunct",
	I1014 19:40:33.082146  437269 command_runner.go:130] > # 	"operations_total",
	I1014 19:40:33.082150  437269 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 19:40:33.082154  437269 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 19:40:33.082157  437269 command_runner.go:130] > # 	"operations_errors_total",
	I1014 19:40:33.082162  437269 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 19:40:33.082169  437269 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 19:40:33.082173  437269 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 19:40:33.082178  437269 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 19:40:33.082182  437269 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 19:40:33.082188  437269 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 19:40:33.082193  437269 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 19:40:33.082199  437269 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 19:40:33.082203  437269 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1014 19:40:33.082208  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082214  437269 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1014 19:40:33.082219  437269 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1014 19:40:33.082224  437269 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 19:40:33.082227  437269 command_runner.go:130] > # metrics_port = 9090
	I1014 19:40:33.082234  437269 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 19:40:33.082238  437269 command_runner.go:130] > # metrics_socket = ""
	I1014 19:40:33.082245  437269 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 19:40:33.082250  437269 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 19:40:33.082258  437269 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 19:40:33.082263  437269 command_runner.go:130] > # certificate on any modification event.
	I1014 19:40:33.082269  437269 command_runner.go:130] > # metrics_cert = ""
	I1014 19:40:33.082274  437269 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 19:40:33.082280  437269 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 19:40:33.082284  437269 command_runner.go:130] > # metrics_key = ""
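If enable_metrics were set to true, a minimal Prometheus scrape sketch for the endpoint above might look like this (the job name is hypothetical):

	scrape_configs:
	  - job_name: crio                     # hypothetical job name
	    static_configs:
	      - targets: ["127.0.0.1:9090"]    # metrics_host:metrics_port from above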
	I1014 19:40:33.082292  437269 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 19:40:33.082295  437269 command_runner.go:130] > [crio.tracing]
	I1014 19:40:33.082300  437269 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 19:40:33.082306  437269 command_runner.go:130] > # enable_tracing = false
	I1014 19:40:33.082311  437269 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1014 19:40:33.082317  437269 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1014 19:40:33.082324  437269 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 19:40:33.082330  437269 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
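If enable_tracing were set to true, a minimal OpenTelemetry Collector sketch could receive these spans on the endpoint above (the debug exporter is just one possible sink):

	receivers:
	  otlp:
	    protocols:
	      grpc:
	        endpoint: 127.0.0.1:4317   # matches tracing_endpoint
	exporters:
	  debug: {}
	service:
	  pipelines:
	    traces:
	      receivers: [otlp]
	      exporters: [debug]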
	I1014 19:40:33.082334  437269 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 19:40:33.082340  437269 command_runner.go:130] > [crio.nri]
	I1014 19:40:33.082345  437269 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 19:40:33.082350  437269 command_runner.go:130] > # enable_nri = true
	I1014 19:40:33.082354  437269 command_runner.go:130] > # NRI socket to listen on.
	I1014 19:40:33.082361  437269 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 19:40:33.082365  437269 command_runner.go:130] > # NRI plugin directory to use.
	I1014 19:40:33.082372  437269 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 19:40:33.082376  437269 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 19:40:33.082383  437269 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 19:40:33.082388  437269 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 19:40:33.082423  437269 command_runner.go:130] > # nri_disable_connections = false
	I1014 19:40:33.082431  437269 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 19:40:33.082435  437269 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 19:40:33.082440  437269 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 19:40:33.082444  437269 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 19:40:33.082451  437269 command_runner.go:130] > # NRI default validator configuration.
	I1014 19:40:33.082457  437269 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1014 19:40:33.082466  437269 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1014 19:40:33.082472  437269 command_runner.go:130] > # can be restricted/rejected:
	I1014 19:40:33.082476  437269 command_runner.go:130] > # - OCI hook injection
	I1014 19:40:33.082483  437269 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1014 19:40:33.082487  437269 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1014 19:40:33.082494  437269 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1014 19:40:33.082498  437269 command_runner.go:130] > # - adjustment of linux namespaces
	I1014 19:40:33.082506  437269 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1014 19:40:33.082514  437269 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1014 19:40:33.082519  437269 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1014 19:40:33.082524  437269 command_runner.go:130] > #
	I1014 19:40:33.082528  437269 command_runner.go:130] > # [crio.nri.default_validator]
	I1014 19:40:33.082535  437269 command_runner.go:130] > # nri_enable_default_validator = false
	I1014 19:40:33.082539  437269 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1014 19:40:33.082546  437269 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1014 19:40:33.082551  437269 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1014 19:40:33.082559  437269 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1014 19:40:33.082564  437269 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1014 19:40:33.082570  437269 command_runner.go:130] > # nri_validator_required_plugins = [
	I1014 19:40:33.082573  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082582  437269 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1014 19:40:33.082587  437269 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 19:40:33.082593  437269 command_runner.go:130] > [crio.stats]
	I1014 19:40:33.082598  437269 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 19:40:33.082608  437269 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 19:40:33.082614  437269 command_runner.go:130] > # stats_collection_period = 0
	I1014 19:40:33.082619  437269 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1014 19:40:33.082628  437269 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1014 19:40:33.082631  437269 command_runner.go:130] > # collection_period = 0
	I1014 19:40:33.082741  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:33.082769  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:33.082789  437269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:40:33.082811  437269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:40:33.082940  437269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:40:33.083002  437269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:40:33.091321  437269 command_runner.go:130] > kubeadm
	I1014 19:40:33.091339  437269 command_runner.go:130] > kubectl
	I1014 19:40:33.091351  437269 command_runner.go:130] > kubelet
	I1014 19:40:33.091376  437269 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:40:33.091429  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:40:33.099086  437269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:40:33.111962  437269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:40:33.125422  437269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1014 19:40:33.138383  437269 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:40:33.142436  437269 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1014 19:40:33.142515  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.229714  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:33.242948  437269 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:40:33.242967  437269 certs.go:195] generating shared ca certs ...
	I1014 19:40:33.242983  437269 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.243111  437269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:40:33.243147  437269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:40:33.243157  437269 certs.go:257] generating profile certs ...
	I1014 19:40:33.243244  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:40:33.243295  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:40:33.243331  437269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:40:33.243342  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 19:40:33.243354  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 19:40:33.243366  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 19:40:33.243378  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 19:40:33.243389  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 19:40:33.243402  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 19:40:33.243414  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 19:40:33.243426  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 19:40:33.243468  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:40:33.243499  437269 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:40:33.243509  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:40:33.243528  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:40:33.243550  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:40:33.243570  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:40:33.243605  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:33.243631  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.243646  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.243657  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.244241  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:40:33.262628  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:40:33.280949  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:40:33.299645  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:40:33.318581  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:40:33.336772  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:40:33.354893  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:40:33.372224  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:40:33.389816  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:40:33.407785  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:40:33.425006  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:40:33.442414  437269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:40:33.455418  437269 ssh_runner.go:195] Run: openssl version
	I1014 19:40:33.461786  437269 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1014 19:40:33.461878  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:40:33.470707  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474930  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474991  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.475040  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.510084  437269 command_runner.go:130] > 51391683
	I1014 19:40:33.510386  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:40:33.519147  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:40:33.528110  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532126  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532195  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532237  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.566452  437269 command_runner.go:130] > 3ec20f2e
	I1014 19:40:33.566529  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 19:40:33.575059  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:40:33.583998  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.587961  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588033  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588081  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.622398  437269 command_runner.go:130] > b5213941
	I1014 19:40:33.622796  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 19:40:33.631371  437269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635295  437269 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635320  437269 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 19:40:33.635326  437269 command_runner.go:130] > Device: 8,1	Inode: 573968      Links: 1
	I1014 19:40:33.635332  437269 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:33.635341  437269 command_runner.go:130] > Access: 2025-10-14 19:36:24.950222095 +0000
	I1014 19:40:33.635346  437269 command_runner.go:130] > Modify: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635350  437269 command_runner.go:130] > Change: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635355  437269 command_runner.go:130] >  Birth: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635409  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:40:33.669731  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.670080  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:40:33.705048  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.705140  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:40:33.739547  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.739632  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:40:33.774590  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.774998  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:40:33.810800  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.810892  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 19:40:33.846191  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.846525  437269 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:33.846626  437269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:40:33.846701  437269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:40:33.876026  437269 cri.go:89] found id: ""
	I1014 19:40:33.876095  437269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:40:33.883772  437269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 19:40:33.883800  437269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 19:40:33.883806  437269 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 19:40:33.884383  437269 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:40:33.884404  437269 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:40:33.884457  437269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:40:33.892144  437269 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:40:33.892232  437269 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.892262  437269 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "functional-744288" cluster setting kubeconfig missing "functional-744288" context setting]
	I1014 19:40:33.892554  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.893171  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.893322  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.893776  437269 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 19:40:33.893798  437269 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 19:40:33.893803  437269 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 19:40:33.893807  437269 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 19:40:33.893810  437269 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 19:40:33.893821  437269 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 19:40:33.894261  437269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:40:33.902475  437269 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 19:40:33.902513  437269 kubeadm.go:601] duration metric: took 18.102158ms to restartPrimaryControlPlane
	I1014 19:40:33.902527  437269 kubeadm.go:402] duration metric: took 56.015342ms to StartCluster
	I1014 19:40:33.902549  437269 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.902670  437269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.903326  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.903559  437269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:40:33.903636  437269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
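
The toEnable map above is the full addon switchboard; on this soft start only storage-provisioner and default-storageclass are true, which is why the next lines set up exactly those two. A sketch of the filter step, under the assumption that addons are keyed by name:

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "storage-provisioner":  true,
            "default-storageclass": true,
            "ingress":              false, // dozens more entries in the real map
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled) // map iteration order is random in Go
        for _, name := range enabled {
            fmt.Println("enabling addon:", name)
        }
    }
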
	I1014 19:40:33.903763  437269 addons.go:69] Setting storage-provisioner=true in profile "functional-744288"
	I1014 19:40:33.903782  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:33.903793  437269 addons.go:69] Setting default-storageclass=true in profile "functional-744288"
	I1014 19:40:33.903791  437269 addons.go:238] Setting addon storage-provisioner=true in "functional-744288"
	I1014 19:40:33.903828  437269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-744288"
	I1014 19:40:33.903863  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.904105  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.904258  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
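
Both cli_runner calls above shell out to docker to read the container state before doing anything over SSH. The same probe as a one-off Go helper (this wraps the exact command in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerStatus(name string) (string, error) {
        // Same command the log shows: docker container inspect --format={{.State.Status}}
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := containerStatus("functional-744288")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("container status:", status) // e.g. "running"
    }
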
	I1014 19:40:33.906507  437269 out.go:179] * Verifying Kubernetes components...
	I1014 19:40:33.907562  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.925699  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.925934  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.926358  437269 addons.go:238] Setting addon default-storageclass=true in "functional-744288"
	I1014 19:40:33.926409  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.926937  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.928366  437269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:40:33.930195  437269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:33.930216  437269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:40:33.930272  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.952215  437269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:33.952244  437269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:40:33.952310  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.956857  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:33.971706  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
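
The two sshutil clients connect to 127.0.0.1:32898 as user docker with the machine's id_rsa key; the earlier `scp memory --> /etc/kubernetes/addons/...` lines then stream the manifest bytes over such a session. A minimal x/crypto/ssh sketch of that push (using `sudo tee` for the privileged write is my assumption, not something the log shows):

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func pushFile(addr, keyPath string, payload []byte, remotePath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test node
        })
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload) // stream the manifest from memory
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
        err := pushFile("127.0.0.1:32898", "/path/to/id_rsa",
            []byte("apiVersion: v1\n"), "/etc/kubernetes/addons/example.yaml")
        if err != nil {
            log.Fatal(err)
        }
    }
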
	I1014 19:40:34.006948  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:34.021044  437269 node_ready.go:35] waiting up to 6m0s for node "functional-744288" to be "Ready" ...
	I1014 19:40:34.021181  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.021246  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.021571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
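
Everything from node_ready.go onward is a 500ms poll of GET /api/v1/nodes/functional-744288, which is why the request/response pairs below repeat at :34.021, :34.521, :35.022 and so on. A client-go sketch of that wait loop, shown as a helper package since wiring a clientset is as in the earlier sketch (the interval and the 6m0s budget mirror the log; the helper name is mine):

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // Connection refused while the apiserver restarts: keep polling.
                    fmt.Println("will retry:", err)
                    return false, nil
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

Returning (false, nil) on error is what keeps a connection-refused poll alive instead of aborting the whole wait.
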
	I1014 19:40:34.069169  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.082461  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.132558  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.132646  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.132686  437269 retry.go:31] will retry after 329.296623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.141809  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.144515  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.144547  437269 retry.go:31] will retry after 261.501781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
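
Every apply in this stretch dies the same way: kubectl's client-side validation first fetches the OpenAPI schema from localhost:8441, and with the apiserver still down even a perfectly valid manifest cannot be validated. The error text itself names the escape hatch, and note that the retries below also switch to `apply --force`. A sketch of the validation-free variant of the logged command (whether minikube ever takes this path is not shown in this log):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same apply as the log, minus schema validation, so it no longer
        // needs the /openapi/v2 endpoint to be reachable.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
    }
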
	I1014 19:40:34.407171  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.461386  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.461450  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.461492  437269 retry.go:31] will retry after 293.495478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.462464  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.513733  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.516544  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.516582  437269 retry.go:31] will retry after 480.429339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.521783  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.522176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:34.755667  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.810676  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.810724  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.810744  437269 retry.go:31] will retry after 614.479011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
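
Each failure lands in retry.go with a short randomized delay: 329ms, 261ms, 293ms, 480ms, 614ms so far, growing toward the multi-second waits later in the log. A minimal helper with the same shape (the doubling and the jitter are assumptions about the pattern, not minikube's exact parameters):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a jittered, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2 // grow toward the multi-second delays seen late in the log
        }
        return err
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(5, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("connection refused")
            }
            return nil
        })
        fmt.Println("succeeded after", calls, "calls")
    }
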
	I1014 19:40:34.998090  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.022038  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.022373  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.049799  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.052676  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.052709  437269 retry.go:31] will retry after 432.01436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.426352  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:35.482403  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.482455  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.482485  437269 retry.go:31] will retry after 1.057612851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.485602  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.522076  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.522160  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.522499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.537729  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.540612  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.540651  437269 retry.go:31] will retry after 1.151923723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.021224  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.021306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.021677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:36.021751  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
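
node_ready.go:55 classifies `dial tcp 192.168.49.2:8441: connect: connection refused` as transient and keeps polling rather than failing the 6-minute wait. In Go the robust way to make that call is errors.Is on the errno, since the whole url.Error, net.OpError, os.SyscallError chain unwraps cleanly:

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "syscall"
    )

    func isRetryable(err error) bool {
        // errors.Is sees through every wrapper down to the raw errno.
        return errors.Is(err, syscall.ECONNREFUSED)
    }

    func main() {
        _, err := http.Get("https://192.168.49.2:8441/api/v1/nodes/functional-744288")
        if err != nil && isRetryable(err) {
            fmt.Println("apiserver not up yet, will retry:", err)
            return
        }
        fmt.Println("reachable (or a non-retryable error):", err)
    }
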
	I1014 19:40:36.521540  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.521648  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:36.541250  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:36.596277  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.596343  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.596366  437269 retry.go:31] will retry after 858.341252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.693590  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:36.746070  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.749114  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.749145  437269 retry.go:31] will retry after 1.225575657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.021547  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.022054  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.455821  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:37.511587  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:37.511647  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.511676  437269 retry.go:31] will retry after 1.002490371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.521830  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.521912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.522269  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.974939  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:38.021626  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:38.022184  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:38.027734  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.030470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.030507  437269 retry.go:31] will retry after 1.025461199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.515193  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:38.521814  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.521914  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.522290  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:38.567735  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.570434  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.570473  437269 retry.go:31] will retry after 1.83061983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.022158  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.022656  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:39.056879  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:39.109896  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:39.112847  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.112884  437269 retry.go:31] will retry after 3.104822489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:40.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.021785  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.022244  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:40.022320  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:40.401833  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:40.453343  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:40.456347  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.456387  437269 retry.go:31] will retry after 3.646877865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.521728  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.522111  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.021801  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.022239  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.521918  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.522016  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.522380  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:42.022132  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.022218  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.022586  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:42.022649  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:42.217895  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:42.273119  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:42.273178  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.273199  437269 retry.go:31] will retry after 5.13792128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.521564  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.522122  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.022026  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.022112  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.521291  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.521385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.521849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.021813  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.021907  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.022272  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.103502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:44.156724  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:44.159470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.159502  437269 retry.go:31] will retry after 6.372961743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.522197  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.522799  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:44.522878  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:45.021683  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.021776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.022120  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:45.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.522209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.021967  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.022441  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.522181  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:47.022210  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.022296  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.022645  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:47.022716  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:47.412207  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:47.466705  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:47.466772  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.466800  437269 retry.go:31] will retry after 6.31356698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.522061  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.522426  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.022131  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.022208  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.022593  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.522267  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.522351  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.522727  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.021317  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.021410  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.021831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.521375  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.521474  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.521884  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:49.521959  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:50.021803  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.021896  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.022319  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.522068  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.522461  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.533648  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:50.590568  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:50.590621  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:50.590649  437269 retry.go:31] will retry after 8.10133009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:51.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.022324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.022671  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:51.521259  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:52.021339  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.021436  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.021838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:52.021911  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
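
The GET /api/v1/nodes/functional-744288 cycle repeating every ~500ms above is the node-readiness wait: fetch the node object, inspect its Ready condition, and tolerate transient errors such as the connection refused seen here. A minimal client-go sketch of that loop, under the assumption of a valid kubeconfig (paths and names are taken from this log for illustration only):

// Minimal client-go sketch of a node Ready poll.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(cs, "functional-744288")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at ~500ms
	}
}
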
	I1014 19:40:52.521431  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.521523  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.521914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.021515  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.021632  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.022015  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.521689  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.522061  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.781554  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:53.838039  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:53.838101  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:53.838128  437269 retry.go:31] will retry after 9.837531091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:54.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:54.022235  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:54.521778  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.521864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.522222  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:55.022074  437269 type.go:168] "Request Body" body=""
	I1014 19:40:55.022163  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:55.022522  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:55.522140  437269 type.go:168] "Request Body" body=""
	I1014 19:40:55.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:55.522653  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:56.021265  437269 type.go:168] "Request Body" body=""
	I1014 19:40:56.021344  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:56.021726  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:56.521342  437269 type.go:168] "Request Body" body=""
	I1014 19:40:56.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:56.521872  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:56.521945  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:57.021424  437269 type.go:168] "Request Body" body=""
	I1014 19:40:57.021552  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:57.021974  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:57.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:57.521797  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:57.522216  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:58.021903  437269 type.go:168] "Request Body" body=""
	I1014 19:40:58.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:58.022398  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:58.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:58.522169  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:58.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:58.522630  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:58.692921  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:58.746193  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:58.749262  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:58.749295  437269 retry.go:31] will retry after 17.735335575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
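
Note the failure pattern: both localhost:8441 (kubectl's openapi fetch from inside the node) and 192.168.49.2:8441 (the readiness poll from the host) are refused, which points at the apiserver not listening at all rather than a routing or firewall problem. A plain TCP dial reproduces that diagnosis in isolation; this is only a diagnostic sketch, with the addresses taken from this log:

// Probe the apiserver port directly; expect "connection refused"
// on both addresses while the apiserver is down.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: listening\n", addr)
	}
}
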
	I1014 19:40:59.021769  437269 type.go:168] "Request Body" body=""
	I1014 19:40:59.021862  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:59.022229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:59.521888  437269 type.go:168] "Request Body" body=""
	I1014 19:40:59.522001  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:59.522349  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:00.021702  437269 type.go:168] "Request Body" body=""
	I1014 19:41:00.021801  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:00.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:00.522173  437269 type.go:168] "Request Body" body=""
	I1014 19:41:00.522273  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:00.522632  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:00.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:01.021455  437269 type.go:168] "Request Body" body=""
	I1014 19:41:01.021548  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:01.021937  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:01.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:41:01.521858  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:01.522279  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:02.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:41:02.022289  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:02.022725  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:02.521517  437269 type.go:168] "Request Body" body=""
	I1014 19:41:02.521656  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:02.522050  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:03.021919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:03.022009  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:03.022403  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:03.022475  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:03.522212  437269 type.go:168] "Request Body" body=""
	I1014 19:41:03.522291  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:03.522659  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:03.675962  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:03.727887  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:03.730521  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:03.730562  437269 retry.go:31] will retry after 19.438885547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:04.022253  437269 type.go:168] "Request Body" body=""
	I1014 19:41:04.022379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:04.022809  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:04.521663  437269 type.go:168] "Request Body" body=""
	I1014 19:41:04.521794  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:04.522180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:05.021978  437269 type.go:168] "Request Body" body=""
	I1014 19:41:05.022063  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:05.022412  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:05.522231  437269 type.go:168] "Request Body" body=""
	I1014 19:41:05.522314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:05.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:05.522732  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:06.021349  437269 type.go:168] "Request Body" body=""
	I1014 19:41:06.021429  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:06.021828  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:06.521569  437269 type.go:168] "Request Body" body=""
	I1014 19:41:06.521651  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:06.522040  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:07.021907  437269 type.go:168] "Request Body" body=""
	I1014 19:41:07.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:07.022361  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:07.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:41:07.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:07.522720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:07.522816  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:08.021308  437269 type.go:168] "Request Body" body=""
	I1014 19:41:08.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:08.021750  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:08.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:41:08.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:08.522125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:09.021981  437269 type.go:168] "Request Body" body=""
	I1014 19:41:09.022069  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:09.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:09.521240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:09.521389  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:09.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:10.021609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:10.021695  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:10.022108  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:10.022177  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:10.522050  437269 type.go:168] "Request Body" body=""
	I1014 19:41:10.522140  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:10.522549  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:11.021354  437269 type.go:168] "Request Body" body=""
	I1014 19:41:11.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:11.021862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:11.521641  437269 type.go:168] "Request Body" body=""
	I1014 19:41:11.521740  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:11.522168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:12.022028  437269 type.go:168] "Request Body" body=""
	I1014 19:41:12.022114  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:12.022483  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:12.022549  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:12.521254  437269 type.go:168] "Request Body" body=""
	I1014 19:41:12.521342  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:12.521740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:13.021557  437269 type.go:168] "Request Body" body=""
	I1014 19:41:13.021642  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:13.022039  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:13.521864  437269 type.go:168] "Request Body" body=""
	I1014 19:41:13.521953  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:13.522323  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:14.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:41:14.022287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:14.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:14.022724  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:14.521434  437269 type.go:168] "Request Body" body=""
	I1014 19:41:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:14.521992  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:15.021751  437269 type.go:168] "Request Body" body=""
	I1014 19:41:15.021849  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:15.022211  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:15.522050  437269 type.go:168] "Request Body" body=""
	I1014 19:41:15.522133  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:15.522522  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:16.021287  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.021373  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.021781  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:16.485413  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:16.522201  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.522623  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:16.522694  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:16.537285  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:16.540211  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:16.540239  437269 retry.go:31] will retry after 23.522391633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:17.021909  437269 type.go:168] "Request Body" body=""
	I1014 19:41:17.022015  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:17.022407  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:17.522283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:17.522380  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:17.522743  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:18.021576  437269 type.go:168] "Request Body" body=""
	I1014 19:41:18.021671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:18.022118  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:18.522003  437269 type.go:168] "Request Body" body=""
	I1014 19:41:18.522089  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:18.522516  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:19.021291  437269 type.go:168] "Request Body" body=""
	I1014 19:41:19.021372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:19.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:19.021855  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:19.521591  437269 type.go:168] "Request Body" body=""
	I1014 19:41:19.521674  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:19.522078  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:20.021898  437269 type.go:168] "Request Body" body=""
	I1014 19:41:20.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:20.022480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:20.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:41:20.521403  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:20.521841  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:21.021619  437269 type.go:168] "Request Body" body=""
	I1014 19:41:21.021721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:21.022173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:21.022242  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:21.522084  437269 type.go:168] "Request Body" body=""
	I1014 19:41:21.522176  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:21.522550  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:22.021344  437269 type.go:168] "Request Body" body=""
	I1014 19:41:22.021423  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:22.021877  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:22.521680  437269 type.go:168] "Request Body" body=""
	I1014 19:41:22.521784  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:22.522158  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:23.022009  437269 type.go:168] "Request Body" body=""
	I1014 19:41:23.022088  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:23.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:23.022557  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:23.169796  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:23.227015  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:23.227096  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.227121  437269 retry.go:31] will retry after 24.705053737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.521443  437269 type.go:168] "Request Body" body=""
	I1014 19:41:23.521533  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:23.522057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:24.021980  437269 type.go:168] "Request Body" body=""
	I1014 19:41:24.022087  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:24.022457  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:24.522136  437269 type.go:168] "Request Body" body=""
	I1014 19:41:24.522235  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:24.522578  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:25.021598  437269 type.go:168] "Request Body" body=""
	I1014 19:41:25.021741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:25.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:25.521746  437269 type.go:168] "Request Body" body=""
	I1014 19:41:25.521865  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:25.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:25.522363  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:26.021980  437269 type.go:168] "Request Body" body=""
	I1014 19:41:26.022056  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:26.022462  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:26.522116  437269 type.go:168] "Request Body" body=""
	I1014 19:41:26.522205  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:26.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:27.022289  437269 type.go:168] "Request Body" body=""
	I1014 19:41:27.022379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:27.022735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:27.521368  437269 type.go:168] "Request Body" body=""
	I1014 19:41:27.521454  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:27.521879  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:28.021445  437269 type.go:168] "Request Body" body=""
	I1014 19:41:28.021545  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:28.021931  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:28.021996  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:28.521541  437269 type.go:168] "Request Body" body=""
	I1014 19:41:28.521630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:28.522060  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:29.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:41:29.021774  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:29.022227  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:29.521894  437269 type.go:168] "Request Body" body=""
	I1014 19:41:29.521983  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:29.522351  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:30.022245  437269 type.go:168] "Request Body" body=""
	I1014 19:41:30.022327  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:30.022707  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:30.022824  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:30.521424  437269 type.go:168] "Request Body" body=""
	I1014 19:41:30.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:30.521982  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:31.021342  437269 type.go:168] "Request Body" body=""
	I1014 19:41:31.021429  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:31.021899  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:31.521503  437269 type.go:168] "Request Body" body=""
	I1014 19:41:31.521595  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:31.522014  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:32.021616  437269 type.go:168] "Request Body" body=""
	I1014 19:41:32.021705  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:32.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:32.521679  437269 type.go:168] "Request Body" body=""
	I1014 19:41:32.521783  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:32.522156  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:32.522231  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:33.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:33.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:33.022214  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:33.521935  437269 type.go:168] "Request Body" body=""
	I1014 19:41:33.522024  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:33.522446  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:34.021233  437269 type.go:168] "Request Body" body=""
	I1014 19:41:34.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:34.021702  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:34.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:41:34.521444  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:34.521880  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:35.021696  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.022177  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:35.022244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:35.521929  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.522017  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.522385  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.022241  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.022330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.022808  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.521609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.522099  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:37.021877  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.021957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.022344  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:37.022414  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[poll loop elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 repeated every ~500ms from 19:41:37.5 to 19:41:40.0, each attempt failing with "connect: connection refused"; node_ready retry warning logged at 19:41:39]
	I1014 19:41:40.063502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:40.119488  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:40.119566  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:40.119604  437269 retry.go:31] will retry after 34.554126144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
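	When an addon manifest fails to apply, minikube logs the failure and schedules another attempt after a randomized delay (here 34.55 s). A sketch of that retry-with-jittered-backoff shape; the attempt count, base delay, and doubling are illustrative assumptions, not retry.go's actual tuning:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	backoff := 10 * time.Second
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		// Randomize the wait so concurrent appliers don't retry in lockstep;
		// this is what produces the odd-looking "will retry after 34.554s".
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("apply failed (%v), will retry after %s\n%s", err, wait, out)
		time.Sleep(wait)
		backoff *= 2
	}
	return fmt.Errorf("apply of %s failed after %d attempts", manifest, attempts)
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3)
}
```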
	[poll loop elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 repeated every ~500ms from 19:41:40.5 to 19:41:47.5, each attempt failing with "connect: connection refused"; node_ready retry warnings logged at 19:41:41.5, 19:41:44 and 19:41:46]
	I1014 19:41:47.932453  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:47.984361  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:47.987254  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:47.987292  437269 retry.go:31] will retry after 37.673790461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
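	Each apply runs as a remote command with KUBECONFIG pointed at the cluster's kubeconfig, and the runner reports stdout, stderr, and the exit status separately, which is exactly the three-part block logged above. A local sketch of that capture pattern; the SSH transport is elided and runLogged is a hypothetical helper:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func runLogged(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	// Same environment shape as the logged command (path is from the log).
	cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	// Mirror the log format: both streams are reported alongside the error,
	// which is "exit status 1" when kubectl cannot reach the apiserver.
	fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
	return err
}

func main() {
	_ = runLogged("kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
}
```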
	[poll loop elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 repeated every ~500ms from 19:41:48 to 19:42:14.5, each attempt failing with "connect: connection refused"; node_ready retry warnings logged roughly every 2–2.5s throughout]
	I1014 19:42:14.674573  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:42:14.729085  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729138  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729273  437269 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
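	The kubectl hint about --validate=false is a red herring here: validation fails only because downloading the OpenAPI schema needs the apiserver, and the apply itself would fail against the same dead endpoint. A sketch of gating the apply on an explicit health probe instead, using the standard Kubernetes /readyz endpoint; the surrounding wiring is hypothetical:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func apiServerReady(base string) bool {
	c := &http.Client{Transport: &http.Transport{
		// Demo only; a real probe would trust the cluster CA.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get(base + "/readyz")
	if err != nil {
		return false // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if !apiServerReady("https://localhost:8441") {
		fmt.Println("apiserver not ready; skipping addon apply")
		return
	}
	fmt.Println("apiserver ready; safe to kubectl apply")
}
```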
	[poll loop elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 repeated every ~500ms from 19:42:15 to 19:42:25.5, each attempt failing with "connect: connection refused"; node_ready retry warnings logged at 19:42:16, 19:42:18, 19:42:20.5, 19:42:22.5 and 19:42:25]
	I1014 19:42:25.661672  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:42:25.715017  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717809  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717938  437269 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 19:42:25.719888  437269 out.go:179] * Enabled addons: 
	I1014 19:42:25.722455  437269 addons.go:514] duration metric: took 1m51.818834592s for enable addons: enabled=[]
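	The closing "duration metric" line is a plain elapsed-time measurement over the whole enable-addons phase; because every apply failed, the enabled set is empty even though nearly two minutes were spent. A minimal sketch of the pattern:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{}              // every addon apply failed, so nothing was enabled
	time.Sleep(10 * time.Millisecond)  // stand-in for the enable-addons work
	// Same shape as the log line: elapsed wall-clock time plus the result set.
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), enabled)
}
```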
	[poll loop elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 repeated every ~500ms from 19:42:26 to 19:42:29.5, each attempt failing with "connect: connection refused"; node_ready retry warnings logged at 19:42:27.5 and 19:42:29.5]
	I1014 19:42:30.021560  437269 type.go:168] "Request Body" body=""
	I1014 19:42:30.021654  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:30.022115  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:30.521649  437269 type.go:168] "Request Body" body=""
	I1014 19:42:30.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:30.522178  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:31.021725  437269 type.go:168] "Request Body" body=""
	I1014 19:42:31.021826  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:31.022186  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:31.521880  437269 type.go:168] "Request Body" body=""
	I1014 19:42:31.521996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:31.522379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:32.021983  437269 type.go:168] "Request Body" body=""
	I1014 19:42:32.022060  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:32.022435  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:32.022510  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:32.522077  437269 type.go:168] "Request Body" body=""
	I1014 19:42:32.522170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:32.522524  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:33.022165  437269 type.go:168] "Request Body" body=""
	I1014 19:42:33.022248  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:33.022592  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:33.521797  437269 type.go:168] "Request Body" body=""
	I1014 19:42:33.522204  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:33.522657  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:34.021345  437269 type.go:168] "Request Body" body=""
	I1014 19:42:34.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:34.021864  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:34.521442  437269 type.go:168] "Request Body" body=""
	I1014 19:42:34.521536  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:34.521932  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:34.522018  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:35.021950  437269 type.go:168] "Request Body" body=""
	I1014 19:42:35.022028  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:35.022451  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:35.521247  437269 type.go:168] "Request Body" body=""
	I1014 19:42:35.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:35.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:36.021379  437269 type.go:168] "Request Body" body=""
	I1014 19:42:36.021471  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:36.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:36.521476  437269 type.go:168] "Request Body" body=""
	I1014 19:42:36.521569  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:36.521989  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:36.522059  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:37.021550  437269 type.go:168] "Request Body" body=""
	I1014 19:42:37.021627  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:37.022016  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:37.521641  437269 type.go:168] "Request Body" body=""
	I1014 19:42:37.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:37.522187  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:38.021859  437269 type.go:168] "Request Body" body=""
	I1014 19:42:38.021939  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:38.022324  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:38.521989  437269 type.go:168] "Request Body" body=""
	I1014 19:42:38.522080  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:38.522434  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:38.522503  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:39.022081  437269 type.go:168] "Request Body" body=""
	I1014 19:42:39.022165  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:39.022503  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:39.522189  437269 type.go:168] "Request Body" body=""
	I1014 19:42:39.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:39.522650  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:40.021651  437269 type.go:168] "Request Body" body=""
	I1014 19:42:40.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:40.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:40.521658  437269 type.go:168] "Request Body" body=""
	I1014 19:42:40.521778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:40.522143  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:41.021691  437269 type.go:168] "Request Body" body=""
	I1014 19:42:41.021793  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:41.022157  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:41.022225  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:41.521808  437269 type.go:168] "Request Body" body=""
	I1014 19:42:41.521901  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:41.522267  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:42.021874  437269 type.go:168] "Request Body" body=""
	I1014 19:42:42.021955  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:42.022329  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:42.521975  437269 type.go:168] "Request Body" body=""
	I1014 19:42:42.522059  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:42.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:43.022032  437269 type.go:168] "Request Body" body=""
	I1014 19:42:43.022120  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:43.022486  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:43.022552  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:43.522253  437269 type.go:168] "Request Body" body=""
	I1014 19:42:43.522342  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:43.522709  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:44.021548  437269 type.go:168] "Request Body" body=""
	I1014 19:42:44.021646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:44.022079  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:44.521677  437269 type.go:168] "Request Body" body=""
	I1014 19:42:44.521784  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:44.522202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:45.022110  437269 type.go:168] "Request Body" body=""
	I1014 19:42:45.022196  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:45.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:45.022661  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:45.522180  437269 type.go:168] "Request Body" body=""
	I1014 19:42:45.522266  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:45.522677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:46.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:42:46.021324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:46.021716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:46.521270  437269 type.go:168] "Request Body" body=""
	I1014 19:42:46.521348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:46.521722  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:47.021311  437269 type.go:168] "Request Body" body=""
	I1014 19:42:47.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:47.021779  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:47.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:42:47.521433  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:47.521823  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:47.521900  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:48.021360  437269 type.go:168] "Request Body" body=""
	I1014 19:42:48.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:48.021837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:48.521366  437269 type.go:168] "Request Body" body=""
	I1014 19:42:48.521469  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:48.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:49.022003  437269 type.go:168] "Request Body" body=""
	I1014 19:42:49.022085  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:49.022428  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:49.522046  437269 type.go:168] "Request Body" body=""
	I1014 19:42:49.522124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:49.522478  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:49.522562  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:50.021433  437269 type.go:168] "Request Body" body=""
	I1014 19:42:50.021542  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:50.021987  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:50.521590  437269 type.go:168] "Request Body" body=""
	I1014 19:42:50.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:50.521991  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:51.021671  437269 type.go:168] "Request Body" body=""
	I1014 19:42:51.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:51.022149  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:51.521719  437269 type.go:168] "Request Body" body=""
	I1014 19:42:51.521832  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:51.522215  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:52.021893  437269 type.go:168] "Request Body" body=""
	I1014 19:42:52.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:52.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:52.022411  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:52.522080  437269 type.go:168] "Request Body" body=""
	I1014 19:42:52.522183  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:52.522617  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:53.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:42:53.022323  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:53.022716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:53.521304  437269 type.go:168] "Request Body" body=""
	I1014 19:42:53.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:53.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:54.021685  437269 type.go:168] "Request Body" body=""
	I1014 19:42:54.021789  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:54.521747  437269 type.go:168] "Request Body" body=""
	I1014 19:42:54.521851  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:54.522275  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:54.522352  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:55.022087  437269 type.go:168] "Request Body" body=""
	I1014 19:42:55.022177  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:55.022557  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:55.522187  437269 type.go:168] "Request Body" body=""
	I1014 19:42:55.522285  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:55.522718  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:56.021281  437269 type.go:168] "Request Body" body=""
	I1014 19:42:56.021383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:56.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:42:56.521430  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:56.521815  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:57.021386  437269 type.go:168] "Request Body" body=""
	I1014 19:42:57.021483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:57.021914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:57.021999  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:57.521600  437269 type.go:168] "Request Body" body=""
	I1014 19:42:57.521687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:57.522087  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:58.021700  437269 type.go:168] "Request Body" body=""
	I1014 19:42:58.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:58.022207  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:58.521870  437269 type.go:168] "Request Body" body=""
	I1014 19:42:58.521949  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:58.522303  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:59.021970  437269 type.go:168] "Request Body" body=""
	I1014 19:42:59.022045  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:59.022443  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:59.022507  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:59.522038  437269 type.go:168] "Request Body" body=""
	I1014 19:42:59.522131  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:59.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:00.021506  437269 type.go:168] "Request Body" body=""
	I1014 19:43:00.021597  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:00.021981  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:00.521539  437269 type.go:168] "Request Body" body=""
	I1014 19:43:00.521625  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:00.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:01.021567  437269 type.go:168] "Request Body" body=""
	I1014 19:43:01.021646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:01.022034  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:01.521607  437269 type.go:168] "Request Body" body=""
	I1014 19:43:01.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:01.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:01.522169  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:02.021674  437269 type.go:168] "Request Body" body=""
	I1014 19:43:02.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:02.022118  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:02.521701  437269 type.go:168] "Request Body" body=""
	I1014 19:43:02.521802  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:02.522123  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:03.021671  437269 type.go:168] "Request Body" body=""
	I1014 19:43:03.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:03.022117  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:03.521807  437269 type.go:168] "Request Body" body=""
	I1014 19:43:03.521898  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:03.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:03.522377  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:04.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:43:04.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:04.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:04.521290  437269 type.go:168] "Request Body" body=""
	I1014 19:43:04.521389  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:04.521814  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:05.021660  437269 type.go:168] "Request Body" body=""
	I1014 19:43:05.021743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:05.022150  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:05.521749  437269 type.go:168] "Request Body" body=""
	I1014 19:43:05.521888  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:05.522240  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:06.021896  437269 type.go:168] "Request Body" body=""
	I1014 19:43:06.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:06.022415  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:06.022501  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:06.522060  437269 type.go:168] "Request Body" body=""
	I1014 19:43:06.522142  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:06.522496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:07.022152  437269 type.go:168] "Request Body" body=""
	I1014 19:43:07.022255  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:07.022672  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:07.521243  437269 type.go:168] "Request Body" body=""
	I1014 19:43:07.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:07.521730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:08.021306  437269 type.go:168] "Request Body" body=""
	I1014 19:43:08.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:08.021797  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:08.521379  437269 type.go:168] "Request Body" body=""
	I1014 19:43:08.521475  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:08.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:08.521921  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:09.021427  437269 type.go:168] "Request Body" body=""
	I1014 19:43:09.021525  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:09.021943  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:09.521610  437269 type.go:168] "Request Body" body=""
	I1014 19:43:09.521709  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:09.522074  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:10.021890  437269 type.go:168] "Request Body" body=""
	I1014 19:43:10.021973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:10.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:10.522040  437269 type.go:168] "Request Body" body=""
	I1014 19:43:10.522122  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:10.522464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:10.522545  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:11.021678  437269 type.go:168] "Request Body" body=""
	I1014 19:43:11.021775  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:11.022124  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:11.521786  437269 type.go:168] "Request Body" body=""
	I1014 19:43:11.521865  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:11.522285  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:12.021630  437269 type.go:168] "Request Body" body=""
	I1014 19:43:12.021721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:12.022083  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:12.521655  437269 type.go:168] "Request Body" body=""
	I1014 19:43:12.521751  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:12.522185  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:13.021857  437269 type.go:168] "Request Body" body=""
	I1014 19:43:13.021947  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:13.022329  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:13.022419  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:13.521998  437269 type.go:168] "Request Body" body=""
	I1014 19:43:13.522076  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:13.522428  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:14.022232  437269 type.go:168] "Request Body" body=""
	I1014 19:43:14.022315  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:14.022692  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:14.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:43:14.521379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:14.521818  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:15.021769  437269 type.go:168] "Request Body" body=""
	I1014 19:43:15.021869  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:15.022238  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:15.521883  437269 type.go:168] "Request Body" body=""
	I1014 19:43:15.521969  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:15.522302  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:15.522372  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:16.021990  437269 type.go:168] "Request Body" body=""
	I1014 19:43:16.022071  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:16.022459  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:16.522107  437269 type.go:168] "Request Body" body=""
	I1014 19:43:16.522190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:16.522527  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:17.022255  437269 type.go:168] "Request Body" body=""
	I1014 19:43:17.022335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:17.022728  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:17.521281  437269 type.go:168] "Request Body" body=""
	I1014 19:43:17.521369  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:17.521726  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:18.021392  437269 type.go:168] "Request Body" body=""
	I1014 19:43:18.021485  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:18.021932  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:18.022012  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:18.521618  437269 type.go:168] "Request Body" body=""
	I1014 19:43:18.521708  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:18.522112  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:19.021718  437269 type.go:168] "Request Body" body=""
	I1014 19:43:19.021829  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:19.022200  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-744288 repeats every ~500ms; every attempt fails with "connect: connection refused" ...]
	W1014 19:43:20.521893  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the poll and the node_ready.go:55 "will retry" warning (recurring every ~2-2.5s, last at 19:44:19.522432) repeat unchanged through 19:44:21, at which point polling continues below ...]
	I1014 19:44:21.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.521555  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.521969  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:22.021534  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.022029  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:22.022098  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:22.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.521729  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.522128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.021712  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.021820  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.022176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.521802  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.521885  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.522258  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:24.022112  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.022201  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.022532  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:24.022600  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:24.522195  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.522634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.021596  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.021676  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.022088  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.521654  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.521741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.522131  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.021684  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.021798  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.022168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.521801  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.522232  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:26.522299  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:27.021847  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.021933  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.022292  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:27.521878  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.521963  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.522328  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.021519  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.021599  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.021968  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.521573  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.521667  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.522077  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:29.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.021839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.022235  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:29.022308  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:29.521910  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.522006  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.522371  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.021252  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.021348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.021744  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.521308  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.521407  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.521858  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.021447  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.021537  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.021993  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.521577  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.521661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.522091  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:31.522171  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:32.021679  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.022180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:32.521862  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.521962  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.522305  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.022031  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.022124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.022484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.522294  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.522643  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:33.522730  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:34.021707  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.021853  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.022332  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:34.522025  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.522147  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.522536  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.021511  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.021620  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.022043  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.522236  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.522681  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:36.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.021313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.021734  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:36.021830  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:36.521316  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.521393  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.521798  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.021352  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.021434  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.521479  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.521566  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.521949  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:38.021522  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.021608  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.022020  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:38.022085  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:38.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.522063  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.021622  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.021702  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.022125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.521740  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.521841  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.522231  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:40.022072  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.022157  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:40.022560  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:40.522145  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.522230  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.021191  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.021271  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.021663  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.521242  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.521677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.021221  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.021300  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.021721  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.521295  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:42.521860  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:43.021377  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.021470  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.021882  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:43.521445  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.521535  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.021811  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.521977  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.522062  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:44.522472  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:45.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.021700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:45.521363  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.521476  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.021493  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.021898  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.521589  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.521682  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.522048  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:47.021649  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.021730  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.022119  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:47.022190  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:47.521670  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.022200  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.521828  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.521908  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:49.021930  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.022025  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.022391  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:49.022471  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:49.522012  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.522093  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.522436  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.021359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.021746  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.521381  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.521749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.021292  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.021375  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.021830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.521389  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.521483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:51.521938  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:52.021392  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.021501  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.021933  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:52.521524  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.521606  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.522002  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.021549  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.522129  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:53.522202  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:54.022063  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.022155  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.022563  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:54.522249  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.522346  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.522749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.021750  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.022126  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.521847  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.522237  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:55.522304  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:56.021875  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.021958  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:56.521953  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.522031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.522402  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.022099  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.022184  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.022571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.522215  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.522635  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:57.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:58.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.021331  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.021778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:58.521330  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.521406  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.521792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.021307  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.021783  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.521317  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.521404  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.521833  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:00.021727  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.021828  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.022220  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:00.022290  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:00.521874  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.521969  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.522342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:01.022108  437269 type.go:168] "Request Body" body=""
	I1014 19:45:01.022195  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:01.022598  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:01.521221  437269 type.go:168] "Request Body" body=""
	I1014 19:45:01.521312  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:01.521684  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:02.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:45:02.021345  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:02.021741  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:02.521281  437269 type.go:168] "Request Body" body=""
	I1014 19:45:02.521368  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:02.521783  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:02.521850  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:03.021427  437269 type.go:168] "Request Body" body=""
	I1014 19:45:03.021538  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:03.022017  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:03.521576  437269 type.go:168] "Request Body" body=""
	I1014 19:45:03.521665  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:03.522065  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:04.021968  437269 type.go:168] "Request Body" body=""
	I1014 19:45:04.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:04.022412  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:04.522089  437269 type.go:168] "Request Body" body=""
	I1014 19:45:04.522186  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:04.522588  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:04.522669  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:05.021532  437269 type.go:168] "Request Body" body=""
	I1014 19:45:05.021627  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:05.022032  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:05.521660  437269 type.go:168] "Request Body" body=""
	I1014 19:45:05.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:05.522144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:06.021836  437269 type.go:168] "Request Body" body=""
	I1014 19:45:06.021915  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:06.022313  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:06.522006  437269 type.go:168] "Request Body" body=""
	I1014 19:45:06.522090  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:06.522505  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:07.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:45:07.022282  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:07.022657  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:07.022726  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:07.522255  437269 type.go:168] "Request Body" body=""
	I1014 19:45:07.522341  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:07.522733  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:08.021293  437269 type.go:168] "Request Body" body=""
	I1014 19:45:08.021376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:08.021784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:08.521329  437269 type.go:168] "Request Body" body=""
	I1014 19:45:08.521407  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:08.521815  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:09.021335  437269 type.go:168] "Request Body" body=""
	I1014 19:45:09.021426  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:09.021821  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:09.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:45:09.521433  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:09.521870  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:09.521948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:10.021750  437269 type.go:168] "Request Body" body=""
	I1014 19:45:10.021864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:10.022248  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:10.521887  437269 type.go:168] "Request Body" body=""
	I1014 19:45:10.521973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:10.522362  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:11.522623  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET request/response cycle above repeats every ~500ms through 19:46:12, each attempt returning an empty response; node_ready.go:55 logs the same "connection refused" warning roughly every 2s, at 19:45:14, 19:45:16, 19:45:19, 19:45:21, 19:45:23, 19:45:25, 19:45:27, 19:45:30, 19:45:32, 19:45:34, 19:45:37, 19:45:39, 19:45:41, 19:45:43, 19:45:46, 19:45:48, 19:45:51, 19:45:53, 19:45:56, 19:45:58, 19:46:00, 19:46:03, 19:46:05, 19:46:07, 19:46:09, and 19:46:11 ...]
	I1014 19:46:12.022004  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.022466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:12.522152  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.522231  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.522572  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.022208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.022686  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.521212  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.521286  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.521620  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:14.021358  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.021869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:14.021948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:14.521427  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.521830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.521922  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.522020  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.522429  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:16.022119  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.022517  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:16.022586  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:16.521207  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.521315  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.521711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.021272  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.021355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.021723  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.521289  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.021359  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.021849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:18.521988  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:19.021521  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:19.521715  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.022258  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.022646  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.522713  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:20.522805  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:21.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.021805  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:21.521347  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.521438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.021364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.021456  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.021861  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.521399  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.521520  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.521917  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:23.021531  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.021637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.022036  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:23.022100  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:23.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.522062  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.021884  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.021977  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.022350  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.522011  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.522097  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.522508  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.021512  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.021596  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.521632  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.521726  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.522148  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:25.522244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:26.021740  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.022219  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:26.521873  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.521956  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.022036  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.022129  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.522188  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:27.522745  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:28.021236  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.021317  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.021676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:28.521949  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.522027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.522409  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.022101  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.022190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.022539  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.522171  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.522256  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.522639  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:30.021643  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:30.022208  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:30.521811  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.521894  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.522289  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.022066  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.022164  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.522208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.522719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.021314  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.021832  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.521461  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:32.521920  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:33.021401  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:33.521545  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.521653  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:34.021736  437269 type.go:168] "Request Body" body=""
	I1014 19:46:34.022027  437269 node_ready.go:38] duration metric: took 6m0.00093705s for node "functional-744288" to be "Ready" ...
	I1014 19:46:34.025220  437269 out.go:203] 
	W1014 19:46:34.026860  437269 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 19:46:34.026878  437269 out.go:285] * 
	W1014 19:46:34.028574  437269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:46:34.030019  437269 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-744288 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.526008953s for "functional-744288" cluster.
I1014 19:46:34.479790  417373 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
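The stderr above is minikube's node-readiness wait in action: the same GET is issued every 500ms, each attempt is refused, and after the 6m0s context deadline fires the start exits with GUEST_START. The following is a minimal Go sketch of that poll-until-deadline pattern, not minikube's actual node_ready implementation; the URL, interval, and timeout are taken from the log, and the readiness check is simplified to an HTTP 200.

	// pollready.go - sketch of a poll-until-deadline loop like the one in the
	// stderr above. Illustrative only; not minikube's node_ready code.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitReady(ctx context.Context, url string, interval time.Duration) error {
		client := &http.Client{
			// The apiserver cert is self-signed, so this sketch skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// Node object retrievable; a real check would inspect the
					// "Ready" condition in the response body.
					return nil
				}
			} else {
				// Corresponds to the "will retry" connection-refused warnings above.
				fmt.Printf("will retry: %v\n", err)
			}
			select {
			case <-ctx.Done():
				// Corresponds to "WaitNodeCondition: context deadline exceeded".
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := waitReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-744288", 500*time.Millisecond)
		if err != nil {
			fmt.Println("X Exiting:", err)
		}
	}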
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
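The NetworkSettings.Ports block above shows each container port published on 127.0.0.1. For triage it is often easier to pull a single mapping with docker inspect's Go-template format than to scan the full JSON; a small sketch follows (the container name and the 8441/tcp key come from the output above, and docker must be on PATH):

	// inspectport.go - sketch: extract the host port Docker mapped to the
	// apiserver port 8441/tcp for the container inspected above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent to reading .NetworkSettings.Ports["8441/tcp"][0].HostPort
		// from the JSON above.
		out, err := exec.Command("docker", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-744288").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
		// With the mapping above this prints 127.0.0.1:32901.
	}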
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (322.15255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
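Note the mismatch this exposes: the Docker host reports Running while every apiserver request in the stderr was refused, so the container is up but nothing is listening on 8441 inside it. A raw TCP dial against the published apiserver port separates the two layers; a sketch assuming the 127.0.0.1:32901 mapping shown in the inspect output above:

	// dialcheck.go - sketch: reproduce the "connection refused" symptom with a
	// raw TCP dial to the host port published for 8441/tcp (32901 above).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:32901", 2*time.Second)
		if err != nil {
			// Expected here: connect: connection refused.
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port accepts TCP connections")
	}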
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 logs -n 25: (1.022636284s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-102449                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-102449   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ --download-only -p download-docker-042272 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-042272 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p download-docker-042272                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-042272 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ --download-only -p binary-mirror-194366 --alsologtostderr --binary-mirror http://127.0.0.1:45401 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-194366   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p binary-mirror-194366                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-194366   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ enable dashboard -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ start   │ -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:23 UTC │ 14 Oct 25 19:23 UTC │
	│ start   │ -p nospam-442016 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-442016 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:23 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-744288      │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	│ start   │ -p functional-744288 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-744288      │ jenkins │ v1.37.0 │ 14 Oct 25 19:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:40:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
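	Each entry below follows that glog convention: the leading letter encodes severity (I=info, W=warning, E=error, F=fatal), followed by the mmdd date, timestamp, thread id, and source location. A minimal sketch for isolating just the warnings and errors in a capture like this one (the file name start.log is a placeholder, not from the report):
	
		# print only W/E/F lines: severity letter immediately followed by the mmdd date
		grep -E '^[[:space:]]*[WEF][0-9]{4} ' start.log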
	I1014 19:40:29.999204  437269 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:40:29.999451  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999459  437269 out.go:374] Setting ErrFile to fd 2...
	I1014 19:40:29.999463  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999664  437269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:40:30.000162  437269 out.go:368] Setting JSON to false
	I1014 19:40:30.001140  437269 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8576,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:40:30.001253  437269 start.go:141] virtualization: kvm guest
	I1014 19:40:30.003929  437269 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:40:30.005394  437269 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:40:30.005413  437269 notify.go:220] Checking for updates...
	I1014 19:40:30.008578  437269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:40:30.009922  437269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:30.011325  437269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:40:30.012721  437269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:40:30.014074  437269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:40:30.015738  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:30.015851  437269 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:40:30.041344  437269 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:40:30.041571  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.106855  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.095983875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.106976  437269 docker.go:318] overlay module found
	I1014 19:40:30.108953  437269 out.go:179] * Using the docker driver based on existing profile
	I1014 19:40:30.110337  437269 start.go:305] selected driver: docker
	I1014 19:40:30.110363  437269 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.110446  437269 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:40:30.110529  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.176521  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.165510899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.177154  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:30.177215  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:30.177273  437269 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.179329  437269 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:40:30.180795  437269 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:40:30.182356  437269 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:40:30.183701  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:30.183742  437269 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:40:30.183752  437269 cache.go:58] Caching tarball of preloaded images
	I1014 19:40:30.183799  437269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:40:30.183863  437269 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:40:30.183877  437269 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:40:30.183979  437269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:40:30.204077  437269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:40:30.204098  437269 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:40:30.204114  437269 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:40:30.204155  437269 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:40:30.204220  437269 start.go:364] duration metric: took 47.096µs to acquireMachinesLock for "functional-744288"
	I1014 19:40:30.204240  437269 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:40:30.204245  437269 fix.go:54] fixHost starting: 
	I1014 19:40:30.204447  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:30.222380  437269 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:40:30.222430  437269 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:40:30.224794  437269 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:40:30.224832  437269 machine.go:93] provisionDockerMachine start ...
	I1014 19:40:30.224915  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.243631  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.243897  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.243914  437269 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:40:30.392088  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.392121  437269 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:40:30.392200  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.410333  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.410549  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.410563  437269 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:40:30.567306  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.567398  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.585534  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.585774  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.585794  437269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:40:30.733740  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
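	The shell block above is minikube's idempotent hostname pin: it maps 127.0.1.1 to the machine name only if /etc/hosts does not already carry an entry for it. A standalone sketch of the same pattern (the hostname value is taken from this run; adjust for other profiles):
	
		NAME="functional-744288"
		if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
		  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
		    # a 127.0.1.1 entry exists; rewrite it in place
		    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
		  else
		    # no 127.0.1.1 entry yet; append one
		    echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
		  fi
		fi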
	I1014 19:40:30.733790  437269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:40:30.733813  437269 ubuntu.go:190] setting up certificates
	I1014 19:40:30.733825  437269 provision.go:84] configureAuth start
	I1014 19:40:30.733878  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:30.751946  437269 provision.go:143] copyHostCerts
	I1014 19:40:30.751989  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752023  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:40:30.752048  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752133  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:40:30.752237  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752267  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:40:30.752278  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752320  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:40:30.752387  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752412  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:40:30.752422  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752463  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:40:30.752709  437269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:40:31.076864  437269 provision.go:177] copyRemoteCerts
	I1014 19:40:31.076930  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:40:31.076971  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.095322  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.200396  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 19:40:31.200473  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:40:31.218084  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 19:40:31.218140  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:40:31.235905  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 19:40:31.235974  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:40:31.253074  437269 provision.go:87] duration metric: took 519.232689ms to configureAuth
	I1014 19:40:31.253110  437269 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:40:31.253264  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:31.253357  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.271451  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:31.271661  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:31.271677  437269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:40:31.540521  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:40:31.540549  437269 machine.go:96] duration metric: took 1.315709373s to provisionDockerMachine
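	The drop-in written just above is how minikube hands extra flags to CRI-O: an environment file under /etc/sysconfig, then a service restart so the flag takes effect. A condensed sketch using the same path and flag shown in the log (run on the node):
	
		sudo mkdir -p /etc/sysconfig
		printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
		sudo systemctl restart crio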
	I1014 19:40:31.540561  437269 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:40:31.540571  437269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:40:31.540628  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:40:31.540669  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.559297  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.665251  437269 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:40:31.669234  437269 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1014 19:40:31.669258  437269 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1014 19:40:31.669267  437269 command_runner.go:130] > VERSION_ID="12"
	I1014 19:40:31.669270  437269 command_runner.go:130] > VERSION="12 (bookworm)"
	I1014 19:40:31.669276  437269 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1014 19:40:31.669279  437269 command_runner.go:130] > ID=debian
	I1014 19:40:31.669283  437269 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1014 19:40:31.669288  437269 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1014 19:40:31.669293  437269 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1014 19:40:31.669341  437269 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:40:31.669359  437269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:40:31.669371  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:40:31.669425  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:40:31.669510  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:40:31.669525  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 19:40:31.669592  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:40:31.669600  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> /etc/test/nested/copy/417373/hosts
	I1014 19:40:31.669633  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:40:31.677988  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:31.696543  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:40:31.715275  437269 start.go:296] duration metric: took 174.687158ms for postStartSetup
	I1014 19:40:31.715383  437269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:40:31.715428  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.734376  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.836456  437269 command_runner.go:130] > 39%
	I1014 19:40:31.836544  437269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:40:31.841513  437269 command_runner.go:130] > 178G
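	The two df probes above are the host disk health check: percent used and gigabytes available on /var. The same checks by hand (run on the node; values will differ between runs):
	
		df -h /var | awk 'NR==2{print $5}'    # percent used, e.g. 39%
		df -BG /var | awk 'NR==2{print $4}'   # gigabytes available, e.g. 178G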
	I1014 19:40:31.841552  437269 fix.go:56] duration metric: took 1.637302821s for fixHost
	I1014 19:40:31.841566  437269 start.go:83] releasing machines lock for "functional-744288", held for 1.637335022s
	I1014 19:40:31.841633  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:31.859002  437269 ssh_runner.go:195] Run: cat /version.json
	I1014 19:40:31.859036  437269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:40:31.859053  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.859093  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.877314  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.877547  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.978415  437269 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1014 19:40:31.978583  437269 ssh_runner.go:195] Run: systemctl --version
	I1014 19:40:32.030433  437269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 19:40:32.032548  437269 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1014 19:40:32.032581  437269 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1014 19:40:32.032653  437269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:40:32.071124  437269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 19:40:32.075797  437269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 19:40:32.076143  437269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:40:32.076213  437269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:40:32.084774  437269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:40:32.084802  437269 start.go:495] detecting cgroup driver to use...
	I1014 19:40:32.084841  437269 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:40:32.084885  437269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:40:32.100807  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:40:32.114918  437269 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:40:32.115001  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:40:32.131082  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:40:32.145731  437269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:40:32.234963  437269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:40:32.329593  437269 docker.go:234] disabling docker service ...
	I1014 19:40:32.329671  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:40:32.344729  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:40:32.357712  437269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:40:32.445038  437269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:40:32.534134  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:40:32.547615  437269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:40:32.562780  437269 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 19:40:32.562835  437269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:40:32.562884  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.572580  437269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:40:32.572655  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.581715  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.590624  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.599492  437269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:40:32.607979  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.617026  437269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.625607  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.634661  437269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:40:32.642022  437269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 19:40:32.642101  437269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:40:32.649948  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:32.737827  437269 ssh_runner.go:195] Run: sudo systemctl restart crio
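	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl), then reloads units and restarts CRI-O. A condensed sketch of the core edits, with the same file path and values the log shows:
	
		CONF=/etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
		sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
		sudo systemctl daemon-reload && sudo systemctl restart crio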
	I1014 19:40:32.854779  437269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:40:32.854851  437269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:40:32.859353  437269 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 19:40:32.859376  437269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 19:40:32.859382  437269 command_runner.go:130] > Device: 0,59	Inode: 3887        Links: 1
	I1014 19:40:32.859389  437269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:32.859394  437269 command_runner.go:130] > Access: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859399  437269 command_runner.go:130] > Modify: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859403  437269 command_runner.go:130] > Change: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859408  437269 command_runner.go:130] >  Birth: 2025-10-14 19:40:32.837516724 +0000
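	"Will wait 60s for socket path" amounts to polling until the CRI-O unix socket appears, then stat-ing it as above. A minimal sketch of that wait loop (the 60-second budget is taken from the log):
	
		SOCK=/var/run/crio/crio.sock
		for _ in $(seq 1 60); do
		  [ -S "$SOCK" ] && break   # -S: path exists and is a socket
		  sleep 1
		done
		stat "$SOCK"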
	I1014 19:40:32.859438  437269 start.go:563] Will wait 60s for crictl version
	I1014 19:40:32.859485  437269 ssh_runner.go:195] Run: which crictl
	I1014 19:40:32.863222  437269 command_runner.go:130] > /usr/local/bin/crictl
	I1014 19:40:32.863312  437269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:40:32.889462  437269 command_runner.go:130] > Version:  0.1.0
	I1014 19:40:32.889482  437269 command_runner.go:130] > RuntimeName:  cri-o
	I1014 19:40:32.889486  437269 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1014 19:40:32.889490  437269 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 19:40:32.889505  437269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:40:32.889559  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.920224  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.920251  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.920258  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.920266  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.920279  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.920285  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.920291  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.920303  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.920312  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.920322  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.920332  437269 command_runner.go:130] >      static
	I1014 19:40:32.920340  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.920347  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.920354  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.920358  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.920361  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.920367  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.920371  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.920379  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.920383  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.920453  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.949467  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.949490  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.949495  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.949499  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.949504  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.949508  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.949514  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.949525  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.949534  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.949540  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.949546  437269 command_runner.go:130] >      static
	I1014 19:40:32.949555  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.949560  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.949567  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.949571  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.949576  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.949582  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.949588  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.949592  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.949599  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.952722  437269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:40:32.953989  437269 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:40:32.971672  437269 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:40:32.976098  437269 command_runner.go:130] > 192.168.49.1	host.minikube.internal
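	The long --format template above pulls the network's name, driver, subnet, gateway, MTU, and container IPs in a single docker call. A minimal version that fetches just the subnet and gateway (network name from this run):
	
		docker network inspect functional-744288 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'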
	I1014 19:40:32.976178  437269 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:40:32.976267  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:32.976332  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.006155  437269 command_runner.go:130] > {
	I1014 19:40:33.006181  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.006186  437269 command_runner.go:130] >     {
	I1014 19:40:33.006194  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.006200  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006209  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.006213  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006218  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006232  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.006248  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.006257  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006270  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.006276  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006281  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006287  437269 command_runner.go:130] >     },
	I1014 19:40:33.006290  437269 command_runner.go:130] >     {
	I1014 19:40:33.006304  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.006316  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006324  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.006330  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006335  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006348  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.006364  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.006372  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006379  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.006388  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006398  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006402  437269 command_runner.go:130] >     },
	I1014 19:40:33.006405  437269 command_runner.go:130] >     {
	I1014 19:40:33.006413  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.006422  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006431  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.006441  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006448  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006463  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.006477  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.006486  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006496  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.006505  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.006513  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006516  437269 command_runner.go:130] >     },
	I1014 19:40:33.006525  437269 command_runner.go:130] >     {
	I1014 19:40:33.006535  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.006545  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006555  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.006563  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006570  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006584  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.006598  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.006607  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006615  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.006619  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006624  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006632  437269 command_runner.go:130] >       },
	I1014 19:40:33.006646  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006657  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006667  437269 command_runner.go:130] >     },
	I1014 19:40:33.006675  437269 command_runner.go:130] >     {
	I1014 19:40:33.006689  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.006695  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006707  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.006714  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006718  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006732  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.006748  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.006767  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006778  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.006786  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006795  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006803  437269 command_runner.go:130] >       },
	I1014 19:40:33.006809  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006819  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006827  437269 command_runner.go:130] >     },
	I1014 19:40:33.006835  437269 command_runner.go:130] >     {
	I1014 19:40:33.006846  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.006855  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006865  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.006874  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006884  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006899  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.006910  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.006918  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006926  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.006935  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006948  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006957  437269 command_runner.go:130] >       },
	I1014 19:40:33.006967  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006976  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006985  437269 command_runner.go:130] >     },
	I1014 19:40:33.006993  437269 command_runner.go:130] >     {
	I1014 19:40:33.007004  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.007011  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007019  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.007027  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007037  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007052  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.007067  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.007076  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007084  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.007092  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007095  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007103  437269 command_runner.go:130] >     },
	I1014 19:40:33.007109  437269 command_runner.go:130] >     {
	I1014 19:40:33.007123  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.007132  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007142  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.007152  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007162  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007175  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.007194  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.007203  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007213  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.007220  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007229  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.007237  437269 command_runner.go:130] >       },
	I1014 19:40:33.007246  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007253  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007260  437269 command_runner.go:130] >     },
	I1014 19:40:33.007266  437269 command_runner.go:130] >     {
	I1014 19:40:33.007278  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.007285  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007290  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.007298  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007308  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007320  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.007334  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.007342  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007351  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.007359  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007370  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.007376  437269 command_runner.go:130] >       },
	I1014 19:40:33.007380  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007387  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.007393  437269 command_runner.go:130] >     }
	I1014 19:40:33.007401  437269 command_runner.go:130] >   ]
	I1014 19:40:33.007406  437269 command_runner.go:130] > }
	I1014 19:40:33.007590  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.007603  437269 crio.go:433] Images already preloaded, skipping extraction
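	The preload check parses the crictl JSON above and compares it against the expected image set. To eyeball the same data, the tags can be extracted with jq (assuming jq is available on the node; it does not appear in this log):
	
		sudo crictl images --output json | jq -r '.images[].repoTags[]'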
	I1014 19:40:33.007661  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.032442  437269 command_runner.go:130] > {
	I1014 19:40:33.032462  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.032466  437269 command_runner.go:130] >     {
	I1014 19:40:33.032478  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.032485  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032495  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.032501  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032508  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032519  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.032527  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.032534  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032538  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.032542  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032548  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032551  437269 command_runner.go:130] >     },
	I1014 19:40:33.032555  437269 command_runner.go:130] >     {
	I1014 19:40:33.032561  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.032567  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032572  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.032575  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032582  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032591  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.032602  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.032608  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032612  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.032616  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032621  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032626  437269 command_runner.go:130] >     },
	I1014 19:40:33.032629  437269 command_runner.go:130] >     {
	I1014 19:40:33.032635  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.032642  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032647  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.032652  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032656  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032665  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.032675  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.032682  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032686  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.032690  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.032694  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032697  437269 command_runner.go:130] >     },
	I1014 19:40:33.032700  437269 command_runner.go:130] >     {
	I1014 19:40:33.032705  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.032709  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032714  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.032720  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032724  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032730  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.032739  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.032743  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032749  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.032772  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032781  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032786  437269 command_runner.go:130] >       },
	I1014 19:40:33.032793  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032798  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032801  437269 command_runner.go:130] >     },
	I1014 19:40:33.032804  437269 command_runner.go:130] >     {
	I1014 19:40:33.032810  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.032816  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032821  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.032827  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032830  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032837  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.032847  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.032850  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032858  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.032862  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032866  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032869  437269 command_runner.go:130] >       },
	I1014 19:40:33.032873  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032877  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032880  437269 command_runner.go:130] >     },
	I1014 19:40:33.032883  437269 command_runner.go:130] >     {
	I1014 19:40:33.032889  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.032895  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032901  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.032906  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032910  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032917  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.032935  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.032940  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032944  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.032948  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032955  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032958  437269 command_runner.go:130] >       },
	I1014 19:40:33.032963  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032969  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032973  437269 command_runner.go:130] >     },
	I1014 19:40:33.032976  437269 command_runner.go:130] >     {
	I1014 19:40:33.032981  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.032986  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032990  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.032996  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033000  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033009  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.033018  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.033023  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033027  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.033033  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033037  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033042  437269 command_runner.go:130] >     },
	I1014 19:40:33.033045  437269 command_runner.go:130] >     {
	I1014 19:40:33.033051  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.033055  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033059  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.033062  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033066  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033073  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.033115  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.033125  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033129  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.033133  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033139  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.033142  437269 command_runner.go:130] >       },
	I1014 19:40:33.033146  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033150  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033153  437269 command_runner.go:130] >     },
	I1014 19:40:33.033157  437269 command_runner.go:130] >     {
	I1014 19:40:33.033166  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.033170  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033175  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.033180  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033184  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033194  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.033201  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.033207  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033210  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.033214  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033217  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.033221  437269 command_runner.go:130] >       },
	I1014 19:40:33.033227  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033231  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.033234  437269 command_runner.go:130] >     }
	I1014 19:40:33.033237  437269 command_runner.go:130] >   ]
	I1014 19:40:33.033243  437269 command_runner.go:130] > }
	I1014 19:40:33.033339  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.033350  437269 cache_images.go:85] Images are preloaded, skipping loading
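Both image lists above are the output of back-to-back "sudo crictl images --output json" runs: minikube decodes the JSON and checks the listed tags against its preload manifest before deciding that extraction and image loading can be skipped. Below is a minimal Go sketch of that kind of check, assuming only the JSON shape visible in the log; the struct and helper names are hypothetical, not minikube's internal types.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the JSON printed above (hypothetical name).
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	// Same command the log shows minikube running on the node.
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags, "pinned:", img.Pinned)
    	}
    }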
	I1014 19:40:33.033357  437269 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:40:33.033466  437269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
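The kubelet drop-in printed above is rendered from the cluster config struct that follows it: the node name and IP become the --hostname-override and --node-ip flags on the ExecStart line. A minimal sketch of assembling that flag list, assuming only the values shown in the log (the helper name is hypothetical; minikube renders this via its own template):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeletExecStart rebuilds the ExecStart flag list logged above.
    func kubeletExecStart(version, nodeName, nodeIP string) string {
    	args := []string{
    		"/var/lib/minikube/binaries/" + version + "/kubelet",
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    		"--cgroups-per-qos=false",
    		"--config=/var/lib/kubelet/config.yaml",
    		"--enforce-node-allocatable=",
    		"--hostname-override=" + nodeName,
    		"--kubeconfig=/etc/kubernetes/kubelet.conf",
    		"--node-ip=" + nodeIP,
    	}
    	return strings.Join(args, " ")
    }

    func main() {
    	fmt.Println(kubeletExecStart("v1.34.1", "functional-744288", "192.168.49.2"))
    }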
	I1014 19:40:33.033525  437269 ssh_runner.go:195] Run: crio config
	I1014 19:40:33.060289  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059904069Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1014 19:40:33.060322  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059934761Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1014 19:40:33.060333  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.05995717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1014 19:40:33.060344  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059977069Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1014 19:40:33.060356  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060036887Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:33.060415  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060204237Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1014 19:40:33.072518  437269 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
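The stderr lines above trace CRI-O's config load order: the single file /etc/crio/crio.conf (skipped here because it does not exist), then the drop-in directory /etc/crio/crio.conf.d, whose files are applied in lexical order, so 02-crio.conf is merged before 10-crio.conf. A minimal sketch of that ordering, assuming only the paths in the log; the merge itself is elided:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// os.ReadDir returns entries sorted by filename, which is the same
    	// lexical order the log shows: 02-crio.conf before 10-crio.conf.
    	entries, err := os.ReadDir("/etc/crio/crio.conf.d")
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		fmt.Println("would merge drop-in:", e.Name())
    	}
    }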
	I1014 19:40:33.078451  437269 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 19:40:33.078471  437269 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 19:40:33.078478  437269 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 19:40:33.078485  437269 command_runner.go:130] > #
	I1014 19:40:33.078491  437269 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 19:40:33.078497  437269 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 19:40:33.078504  437269 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 19:40:33.078513  437269 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 19:40:33.078518  437269 command_runner.go:130] > # reload'.
	I1014 19:40:33.078524  437269 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 19:40:33.078533  437269 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 19:40:33.078539  437269 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 19:40:33.078545  437269 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 19:40:33.078551  437269 command_runner.go:130] > [crio]
	I1014 19:40:33.078557  437269 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 19:40:33.078564  437269 command_runner.go:130] > # containers images, in this directory.
	I1014 19:40:33.078572  437269 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1014 19:40:33.078580  437269 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 19:40:33.078585  437269 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1014 19:40:33.078594  437269 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1014 19:40:33.078601  437269 command_runner.go:130] > # imagestore = ""
	I1014 19:40:33.078607  437269 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 19:40:33.078615  437269 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 19:40:33.078620  437269 command_runner.go:130] > # storage_driver = "overlay"
	I1014 19:40:33.078625  437269 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 19:40:33.078633  437269 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 19:40:33.078637  437269 command_runner.go:130] > # storage_option = [
	I1014 19:40:33.078642  437269 command_runner.go:130] > # ]
	I1014 19:40:33.078648  437269 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 19:40:33.078656  437269 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 19:40:33.078660  437269 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 19:40:33.078667  437269 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 19:40:33.078673  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 19:40:33.078690  437269 command_runner.go:130] > # always happen on a node reboot
	I1014 19:40:33.078695  437269 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 19:40:33.078703  437269 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 19:40:33.078709  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 19:40:33.078716  437269 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 19:40:33.078720  437269 command_runner.go:130] > # version_file_persist = ""
	I1014 19:40:33.078729  437269 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 19:40:33.078739  437269 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 19:40:33.078745  437269 command_runner.go:130] > # internal_wipe = true
	I1014 19:40:33.078771  437269 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 19:40:33.078784  437269 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 19:40:33.078790  437269 command_runner.go:130] > # internal_repair = true
	I1014 19:40:33.078798  437269 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 19:40:33.078804  437269 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 19:40:33.078816  437269 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 19:40:33.078823  437269 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 19:40:33.078829  437269 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 19:40:33.078834  437269 command_runner.go:130] > [crio.api]
	I1014 19:40:33.078839  437269 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 19:40:33.078846  437269 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 19:40:33.078851  437269 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 19:40:33.078858  437269 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 19:40:33.078864  437269 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 19:40:33.078871  437269 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 19:40:33.078875  437269 command_runner.go:130] > # stream_port = "0"
	I1014 19:40:33.078881  437269 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 19:40:33.078885  437269 command_runner.go:130] > # stream_enable_tls = false
	I1014 19:40:33.078893  437269 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 19:40:33.078897  437269 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 19:40:33.078904  437269 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 19:40:33.078912  437269 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078916  437269 command_runner.go:130] > # stream_tls_cert = ""
	I1014 19:40:33.078924  437269 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 19:40:33.078931  437269 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078936  437269 command_runner.go:130] > # stream_tls_key = ""
	I1014 19:40:33.078941  437269 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 19:40:33.078949  437269 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 19:40:33.078954  437269 command_runner.go:130] > # automatically pick up the changes.
	I1014 19:40:33.078960  437269 command_runner.go:130] > # stream_tls_ca = ""
	I1014 19:40:33.078977  437269 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078984  437269 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1014 19:40:33.078991  437269 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078998  437269 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1014 19:40:33.079004  437269 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 19:40:33.079011  437269 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 19:40:33.079015  437269 command_runner.go:130] > [crio.runtime]
	I1014 19:40:33.079021  437269 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 19:40:33.079028  437269 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 19:40:33.079032  437269 command_runner.go:130] > # "nofile=1024:2048"
	I1014 19:40:33.079040  437269 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 19:40:33.079046  437269 command_runner.go:130] > # default_ulimits = [
	I1014 19:40:33.079049  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079054  437269 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 19:40:33.079060  437269 command_runner.go:130] > # no_pivot = false
	I1014 19:40:33.079065  437269 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 19:40:33.079073  437269 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 19:40:33.079078  437269 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 19:40:33.079086  437269 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 19:40:33.079090  437269 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 19:40:33.079099  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079105  437269 command_runner.go:130] > # conmon = ""
	I1014 19:40:33.079109  437269 command_runner.go:130] > # Cgroup setting for conmon
	I1014 19:40:33.079117  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 19:40:33.079123  437269 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 19:40:33.079129  437269 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 19:40:33.079136  437269 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 19:40:33.079142  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079147  437269 command_runner.go:130] > # conmon_env = [
	I1014 19:40:33.079150  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079155  437269 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 19:40:33.079163  437269 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 19:40:33.079169  437269 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 19:40:33.079175  437269 command_runner.go:130] > # default_env = [
	I1014 19:40:33.079177  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079183  437269 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 19:40:33.079192  437269 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1014 19:40:33.079198  437269 command_runner.go:130] > # selinux = false
	I1014 19:40:33.079204  437269 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 19:40:33.079210  437269 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1014 19:40:33.079219  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079225  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.079231  437269 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1014 19:40:33.079237  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079242  437269 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1014 19:40:33.079250  437269 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 19:40:33.079258  437269 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 19:40:33.079264  437269 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 19:40:33.079273  437269 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 19:40:33.079279  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079284  437269 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 19:40:33.079291  437269 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 19:40:33.079295  437269 command_runner.go:130] > # the cgroup blockio controller.
	I1014 19:40:33.079301  437269 command_runner.go:130] > # blockio_config_file = ""
	I1014 19:40:33.079308  437269 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 19:40:33.079314  437269 command_runner.go:130] > # blockio parameters.
	I1014 19:40:33.079317  437269 command_runner.go:130] > # blockio_reload = false
	I1014 19:40:33.079325  437269 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 19:40:33.079329  437269 command_runner.go:130] > # irqbalance daemon.
	I1014 19:40:33.079336  437269 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 19:40:33.079342  437269 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 19:40:33.079351  437269 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 19:40:33.079360  437269 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 19:40:33.079367  437269 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 19:40:33.079374  437269 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 19:40:33.079380  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079385  437269 command_runner.go:130] > # rdt_config_file = ""
	I1014 19:40:33.079393  437269 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 19:40:33.079396  437269 command_runner.go:130] > # cgroup_manager = "systemd"
	I1014 19:40:33.079402  437269 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 19:40:33.079407  437269 command_runner.go:130] > # separate_pull_cgroup = ""
	I1014 19:40:33.079413  437269 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 19:40:33.079421  437269 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 19:40:33.079427  437269 command_runner.go:130] > # will be added.
	I1014 19:40:33.079430  437269 command_runner.go:130] > # default_capabilities = [
	I1014 19:40:33.079433  437269 command_runner.go:130] > # 	"CHOWN",
	I1014 19:40:33.079439  437269 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 19:40:33.079442  437269 command_runner.go:130] > # 	"FSETID",
	I1014 19:40:33.079445  437269 command_runner.go:130] > # 	"FOWNER",
	I1014 19:40:33.079451  437269 command_runner.go:130] > # 	"SETGID",
	I1014 19:40:33.079466  437269 command_runner.go:130] > # 	"SETUID",
	I1014 19:40:33.079472  437269 command_runner.go:130] > # 	"SETPCAP",
	I1014 19:40:33.079475  437269 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 19:40:33.079480  437269 command_runner.go:130] > # 	"KILL",
	I1014 19:40:33.079484  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079493  437269 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 19:40:33.079501  437269 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 19:40:33.079508  437269 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 19:40:33.079514  437269 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 19:40:33.079522  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079526  437269 command_runner.go:130] > default_sysctls = [
	I1014 19:40:33.079530  437269 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 19:40:33.079536  437269 command_runner.go:130] > ]
	I1014 19:40:33.079540  437269 command_runner.go:130] > # List of devices on the host that a
	I1014 19:40:33.079548  437269 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 19:40:33.079553  437269 command_runner.go:130] > # allowed_devices = [
	I1014 19:40:33.079557  437269 command_runner.go:130] > # 	"/dev/fuse",
	I1014 19:40:33.079563  437269 command_runner.go:130] > # 	"/dev/net/tun",
	I1014 19:40:33.079566  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079574  437269 command_runner.go:130] > # List of additional devices, specified as
	I1014 19:40:33.079581  437269 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 19:40:33.079588  437269 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 19:40:33.079595  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079601  437269 command_runner.go:130] > # additional_devices = [
	I1014 19:40:33.079604  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079611  437269 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 19:40:33.079615  437269 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 19:40:33.079619  437269 command_runner.go:130] > # 	"/etc/cdi",
	I1014 19:40:33.079625  437269 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 19:40:33.079628  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079633  437269 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 19:40:33.079641  437269 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 19:40:33.079645  437269 command_runner.go:130] > # Defaults to false.
	I1014 19:40:33.079652  437269 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 19:40:33.079659  437269 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 19:40:33.079666  437269 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 19:40:33.079670  437269 command_runner.go:130] > # hooks_dir = [
	I1014 19:40:33.079682  437269 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 19:40:33.079687  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079693  437269 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 19:40:33.079701  437269 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 19:40:33.079706  437269 command_runner.go:130] > # its default mounts from the following two files:
	I1014 19:40:33.079712  437269 command_runner.go:130] > #
	I1014 19:40:33.079718  437269 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 19:40:33.079726  437269 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 19:40:33.079734  437269 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 19:40:33.079737  437269 command_runner.go:130] > #
	I1014 19:40:33.079743  437269 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 19:40:33.079751  437269 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 19:40:33.079780  437269 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 19:40:33.079788  437269 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 19:40:33.079791  437269 command_runner.go:130] > #
	I1014 19:40:33.079797  437269 command_runner.go:130] > # default_mounts_file = ""
	I1014 19:40:33.079804  437269 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 19:40:33.079811  437269 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 19:40:33.079816  437269 command_runner.go:130] > # pids_limit = -1
	I1014 19:40:33.079822  437269 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1014 19:40:33.079830  437269 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 19:40:33.079839  437269 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 19:40:33.079846  437269 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 19:40:33.079852  437269 command_runner.go:130] > # log_size_max = -1
	I1014 19:40:33.079858  437269 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 19:40:33.079864  437269 command_runner.go:130] > # log_to_journald = false
	I1014 19:40:33.079870  437269 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 19:40:33.079878  437269 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 19:40:33.079883  437269 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 19:40:33.079890  437269 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 19:40:33.079895  437269 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 19:40:33.079901  437269 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 19:40:33.079906  437269 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 19:40:33.079912  437269 command_runner.go:130] > # read_only = false
	I1014 19:40:33.079917  437269 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 19:40:33.079926  437269 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 19:40:33.079933  437269 command_runner.go:130] > # live configuration reload.
	I1014 19:40:33.079937  437269 command_runner.go:130] > # log_level = "info"
	I1014 19:40:33.079942  437269 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 19:40:33.079950  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079953  437269 command_runner.go:130] > # log_filter = ""
	I1014 19:40:33.079959  437269 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079967  437269 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 19:40:33.079970  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.079978  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.079983  437269 command_runner.go:130] > # uid_mappings = ""
	I1014 19:40:33.079989  437269 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079997  437269 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 19:40:33.080005  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.080014  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080020  437269 command_runner.go:130] > # gid_mappings = ""
	I1014 19:40:33.080026  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 19:40:33.080035  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080043  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080049  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080055  437269 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 19:40:33.080061  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 19:40:33.080069  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080075  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080085  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080090  437269 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 19:40:33.080096  437269 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 19:40:33.080112  437269 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 19:40:33.080120  437269 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 19:40:33.080124  437269 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 19:40:33.080131  437269 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 19:40:33.080138  437269 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 19:40:33.080144  437269 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 19:40:33.080149  437269 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 19:40:33.080155  437269 command_runner.go:130] > # drop_infra_ctr = true
	I1014 19:40:33.080160  437269 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 19:40:33.080168  437269 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 19:40:33.080175  437269 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 19:40:33.080181  437269 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 19:40:33.080188  437269 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 19:40:33.080195  437269 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 19:40:33.080200  437269 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 19:40:33.080207  437269 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 19:40:33.080211  437269 command_runner.go:130] > # shared_cpuset = ""
	I1014 19:40:33.080219  437269 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 19:40:33.080223  437269 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 19:40:33.080230  437269 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 19:40:33.080237  437269 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 19:40:33.080243  437269 command_runner.go:130] > # pinns_path = ""
	I1014 19:40:33.080249  437269 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 19:40:33.080256  437269 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1014 19:40:33.080261  437269 command_runner.go:130] > # enable_criu_support = true
	I1014 19:40:33.080268  437269 command_runner.go:130] > # Enable/disable the generation of the container,
	I1014 19:40:33.080273  437269 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 19:40:33.080280  437269 command_runner.go:130] > # enable_pod_events = false
	I1014 19:40:33.080285  437269 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 19:40:33.080292  437269 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 19:40:33.080296  437269 command_runner.go:130] > # default_runtime = "crun"
	I1014 19:40:33.080301  437269 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 19:40:33.080310  437269 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1014 19:40:33.080320  437269 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 19:40:33.080325  437269 command_runner.go:130] > # creation as a file is not desired either.
	I1014 19:40:33.080336  437269 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 19:40:33.080342  437269 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 19:40:33.080346  437269 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 19:40:33.080352  437269 command_runner.go:130] > # ]
	I1014 19:40:33.080357  437269 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 19:40:33.080365  437269 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 19:40:33.080373  437269 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 19:40:33.080378  437269 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 19:40:33.080382  437269 command_runner.go:130] > #
	I1014 19:40:33.080387  437269 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 19:40:33.080394  437269 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 19:40:33.080397  437269 command_runner.go:130] > # runtime_type = "oci"
	I1014 19:40:33.080404  437269 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 19:40:33.080408  437269 command_runner.go:130] > # inherit_default_runtime = false
	I1014 19:40:33.080413  437269 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 19:40:33.080419  437269 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 19:40:33.080424  437269 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 19:40:33.080430  437269 command_runner.go:130] > # monitor_env = []
	I1014 19:40:33.080435  437269 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 19:40:33.080440  437269 command_runner.go:130] > # allowed_annotations = []
	I1014 19:40:33.080445  437269 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 19:40:33.080451  437269 command_runner.go:130] > # no_sync_log = false
	I1014 19:40:33.080455  437269 command_runner.go:130] > # default_annotations = {}
	I1014 19:40:33.080461  437269 command_runner.go:130] > # stream_websockets = false
	I1014 19:40:33.080465  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.080487  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.080494  437269 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 19:40:33.080500  437269 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 19:40:33.080508  437269 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 19:40:33.080514  437269 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 19:40:33.080519  437269 command_runner.go:130] > #   in $PATH.
	I1014 19:40:33.080525  437269 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 19:40:33.080532  437269 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 19:40:33.080538  437269 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 19:40:33.080543  437269 command_runner.go:130] > #   state.
	I1014 19:40:33.080552  437269 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 19:40:33.080560  437269 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1014 19:40:33.080565  437269 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1014 19:40:33.080573  437269 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1014 19:40:33.080578  437269 command_runner.go:130] > #   the values from the default runtime on load time.
	I1014 19:40:33.080586  437269 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 19:40:33.080591  437269 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 19:40:33.080599  437269 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 19:40:33.080605  437269 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 19:40:33.080612  437269 command_runner.go:130] > #   The currently recognized values are:
	I1014 19:40:33.080618  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 19:40:33.080627  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 19:40:33.080636  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 19:40:33.080641  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 19:40:33.080651  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 19:40:33.080660  437269 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 19:40:33.080669  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 19:40:33.080680  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 19:40:33.080687  437269 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 19:40:33.080693  437269 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1014 19:40:33.080702  437269 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1014 19:40:33.080710  437269 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1014 19:40:33.080715  437269 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1014 19:40:33.080724  437269 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1014 19:40:33.080732  437269 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1014 19:40:33.080738  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1014 19:40:33.080747  437269 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 19:40:33.080751  437269 command_runner.go:130] > #   deprecated option "conmon".
	I1014 19:40:33.080773  437269 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 19:40:33.080783  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 19:40:33.080796  437269 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 19:40:33.080803  437269 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 19:40:33.080810  437269 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1014 19:40:33.080817  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 19:40:33.080824  437269 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1014 19:40:33.080830  437269 command_runner.go:130] > #   conmon-rs by using:
	I1014 19:40:33.080837  437269 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1014 19:40:33.080847  437269 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1014 19:40:33.080857  437269 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1014 19:40:33.080865  437269 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 19:40:33.080872  437269 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 19:40:33.080879  437269 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1014 19:40:33.080888  437269 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1014 19:40:33.080894  437269 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1014 19:40:33.080904  437269 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1014 19:40:33.080915  437269 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1014 19:40:33.080921  437269 command_runner.go:130] > #   when a machine crash happens.
	I1014 19:40:33.080929  437269 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1014 19:40:33.080939  437269 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1014 19:40:33.080949  437269 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1014 19:40:33.080955  437269 command_runner.go:130] > #   seccomp profile for the runtime.
	I1014 19:40:33.080961  437269 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1014 19:40:33.080970  437269 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1014 19:40:33.080975  437269 command_runner.go:130] > #
	I1014 19:40:33.080980  437269 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 19:40:33.080985  437269 command_runner.go:130] > #
	I1014 19:40:33.080991  437269 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 19:40:33.080998  437269 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1014 19:40:33.081002  437269 command_runner.go:130] > #
	I1014 19:40:33.081007  437269 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 19:40:33.081015  437269 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 19:40:33.081020  437269 command_runner.go:130] > #
	I1014 19:40:33.081026  437269 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 19:40:33.081032  437269 command_runner.go:130] > # feature.
	I1014 19:40:33.081035  437269 command_runner.go:130] > #
	I1014 19:40:33.081042  437269 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1014 19:40:33.081048  437269 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 19:40:33.081057  437269 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 19:40:33.081062  437269 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 19:40:33.081070  437269 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1014 19:40:33.081073  437269 command_runner.go:130] > #
	I1014 19:40:33.081079  437269 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 19:40:33.081087  437269 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 19:40:33.081090  437269 command_runner.go:130] > #
	I1014 19:40:33.081096  437269 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1014 19:40:33.081103  437269 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 19:40:33.081106  437269 command_runner.go:130] > #
	I1014 19:40:33.081112  437269 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 19:40:33.081119  437269 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 19:40:33.081122  437269 command_runner.go:130] > # limitation.
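[Editor's note: putting the notifier pieces together, a minimal sketch with illustrative values (not from this run): the runtime handler must allow the annotation, and the Pod sets it with the action "stop".]

    [crio.runtime.runtimes.runc]
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]

and in the Pod metadata (with restartPolicy: Never, per the note above):

    annotations:
      io.kubernetes.cri-o.seccompNotifierAction: "stop"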
	I1014 19:40:33.081129  437269 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1014 19:40:33.081138  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1014 19:40:33.081143  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081147  437269 command_runner.go:130] > runtime_root = "/run/crun"
	I1014 19:40:33.081151  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081157  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081161  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081167  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081171  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081177  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081181  437269 command_runner.go:130] > allowed_annotations = [
	I1014 19:40:33.081187  437269 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1014 19:40:33.081190  437269 command_runner.go:130] > ]
	I1014 19:40:33.081197  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081201  437269 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 19:40:33.081208  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1014 19:40:33.081212  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081218  437269 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 19:40:33.081222  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081229  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081234  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081241  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081245  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081251  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081256  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081264  437269 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 19:40:33.081271  437269 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 19:40:33.081277  437269 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 19:40:33.081286  437269 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1014 19:40:33.081298  437269 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1014 19:40:33.081309  437269 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1014 19:40:33.081318  437269 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1014 19:40:33.081324  437269 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 19:40:33.081335  437269 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 19:40:33.081345  437269 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 19:40:33.081353  437269 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 19:40:33.081359  437269 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 19:40:33.081365  437269 command_runner.go:130] > # Example:
	I1014 19:40:33.081369  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 19:40:33.081375  437269 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 19:40:33.081380  437269 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 19:40:33.081389  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 19:40:33.081395  437269 command_runner.go:130] > # cpuset = "0-1"
	I1014 19:40:33.081399  437269 command_runner.go:130] > # cpushares = "5"
	I1014 19:40:33.081405  437269 command_runner.go:130] > # cpuquota = "1000"
	I1014 19:40:33.081408  437269 command_runner.go:130] > # cpuperiod = "100000"
	I1014 19:40:33.081412  437269 command_runner.go:130] > # cpulimit = "35"
	I1014 19:40:33.081417  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.081421  437269 command_runner.go:130] > # The workload name is workload-type.
	I1014 19:40:33.081430  437269 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 19:40:33.081438  437269 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 19:40:33.081443  437269 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 19:40:33.081453  437269 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 19:40:33.081470  437269 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
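[Editor's note: tying the example together, a Pod opting into the workload above might carry annotations like the following sketch, using the annotation form just shown; the container name "my-ctr" and the cpushares value are hypothetical.]

    metadata:
      annotations:
        io.crio/workload: ""
        io.crio.workload-type/my-ctr: '{"cpushares": "200"}'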
	I1014 19:40:33.081477  437269 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 19:40:33.081484  437269 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 19:40:33.081490  437269 command_runner.go:130] > # Default value is set to true
	I1014 19:40:33.081494  437269 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 19:40:33.081499  437269 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1014 19:40:33.081505  437269 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1014 19:40:33.081510  437269 command_runner.go:130] > # Default value is set to 'false'
	I1014 19:40:33.081516  437269 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 19:40:33.081522  437269 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1014 19:40:33.081531  437269 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1014 19:40:33.081537  437269 command_runner.go:130] > # timezone = ""
	I1014 19:40:33.081543  437269 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 19:40:33.081549  437269 command_runner.go:130] > #
	I1014 19:40:33.081555  437269 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 19:40:33.081563  437269 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1014 19:40:33.081567  437269 command_runner.go:130] > [crio.image]
	I1014 19:40:33.081575  437269 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 19:40:33.081579  437269 command_runner.go:130] > # default_transport = "docker://"
	I1014 19:40:33.081585  437269 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 19:40:33.081593  437269 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081597  437269 command_runner.go:130] > # global_auth_file = ""
	I1014 19:40:33.081604  437269 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 19:40:33.081609  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081616  437269 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.081622  437269 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 19:40:33.081630  437269 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081634  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081639  437269 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 19:40:33.081645  437269 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 19:40:33.081653  437269 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1014 19:40:33.081658  437269 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1014 19:40:33.081666  437269 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 19:40:33.081671  437269 command_runner.go:130] > # pause_command = "/pause"
	I1014 19:40:33.081682  437269 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 19:40:33.081690  437269 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 19:40:33.081695  437269 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 19:40:33.081703  437269 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 19:40:33.081709  437269 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 19:40:33.081717  437269 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 19:40:33.081723  437269 command_runner.go:130] > # pinned_images = [
	I1014 19:40:33.081725  437269 command_runner.go:130] > # ]
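[Editor's note: to illustrate the three pattern types just described, a hypothetical pinned_images list; the image names are examples only.]

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",  # exact match
        "quay.io/myorg/*",               # glob: wildcard at the end
        "*critical*",                    # keyword: wildcards on both ends
    ]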
	I1014 19:40:33.081731  437269 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 19:40:33.081739  437269 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 19:40:33.081745  437269 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 19:40:33.081762  437269 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 19:40:33.081774  437269 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 19:40:33.081781  437269 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1014 19:40:33.081789  437269 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 19:40:33.081795  437269 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 19:40:33.081804  437269 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 19:40:33.081813  437269 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1014 19:40:33.081822  437269 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 19:40:33.081833  437269 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
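[Editor's note: for orientation, the file referenced by signature_policy follows the containers-policy.json(5) format; a common minimal (accept-everything) policy looks like this sketch.]

    {
        "default": [
            { "type": "insecureAcceptAnything" }
        ]
    }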
	I1014 19:40:33.081841  437269 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 19:40:33.081847  437269 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 19:40:33.081853  437269 command_runner.go:130] > # changing them here.
	I1014 19:40:33.081859  437269 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1014 19:40:33.081865  437269 command_runner.go:130] > # insecure_registries = [
	I1014 19:40:33.081868  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081877  437269 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 19:40:33.081887  437269 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1014 19:40:33.081893  437269 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 19:40:33.081898  437269 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 19:40:33.081904  437269 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 19:40:33.081910  437269 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1014 19:40:33.081918  437269 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1014 19:40:33.081925  437269 command_runner.go:130] > # auto_reload_registries = false
	I1014 19:40:33.081932  437269 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1014 19:40:33.081940  437269 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1014 19:40:33.081947  437269 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1014 19:40:33.081951  437269 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1014 19:40:33.081958  437269 command_runner.go:130] > # The mode of short name resolution.
	I1014 19:40:33.081966  437269 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1014 19:40:33.081977  437269 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1014 19:40:33.081984  437269 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1014 19:40:33.081989  437269 command_runner.go:130] > # short_name_mode = "enforcing"
	I1014 19:40:33.081997  437269 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1014 19:40:33.082002  437269 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1014 19:40:33.082009  437269 command_runner.go:130] > # oci_artifact_mount_support = true
	I1014 19:40:33.082015  437269 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 19:40:33.082021  437269 command_runner.go:130] > # CNI plugins.
	I1014 19:40:33.082025  437269 command_runner.go:130] > [crio.network]
	I1014 19:40:33.082033  437269 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 19:40:33.082040  437269 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1014 19:40:33.082044  437269 command_runner.go:130] > # cni_default_network = ""
	I1014 19:40:33.082052  437269 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 19:40:33.082056  437269 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 19:40:33.082064  437269 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 19:40:33.082068  437269 command_runner.go:130] > # plugin_dirs = [
	I1014 19:40:33.082071  437269 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 19:40:33.082074  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082078  437269 command_runner.go:130] > # List of included pod metrics.
	I1014 19:40:33.082082  437269 command_runner.go:130] > # included_pod_metrics = [
	I1014 19:40:33.082085  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082089  437269 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1014 19:40:33.082092  437269 command_runner.go:130] > [crio.metrics]
	I1014 19:40:33.082097  437269 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 19:40:33.082100  437269 command_runner.go:130] > # enable_metrics = false
	I1014 19:40:33.082104  437269 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 19:40:33.082108  437269 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 19:40:33.082114  437269 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1014 19:40:33.082119  437269 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 19:40:33.082124  437269 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 19:40:33.082128  437269 command_runner.go:130] > # metrics_collectors = [
	I1014 19:40:33.082131  437269 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 19:40:33.082135  437269 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 19:40:33.082139  437269 command_runner.go:130] > # 	"containers_oom_total",
	I1014 19:40:33.082142  437269 command_runner.go:130] > # 	"processes_defunct",
	I1014 19:40:33.082146  437269 command_runner.go:130] > # 	"operations_total",
	I1014 19:40:33.082150  437269 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 19:40:33.082154  437269 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 19:40:33.082157  437269 command_runner.go:130] > # 	"operations_errors_total",
	I1014 19:40:33.082162  437269 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 19:40:33.082169  437269 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 19:40:33.082173  437269 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 19:40:33.082178  437269 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 19:40:33.082182  437269 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 19:40:33.082188  437269 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 19:40:33.082193  437269 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 19:40:33.082199  437269 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 19:40:33.082203  437269 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1014 19:40:33.082208  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082214  437269 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1014 19:40:33.082219  437269 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1014 19:40:33.082224  437269 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 19:40:33.082227  437269 command_runner.go:130] > # metrics_port = 9090
	I1014 19:40:33.082234  437269 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 19:40:33.082238  437269 command_runner.go:130] > # metrics_socket = ""
	I1014 19:40:33.082245  437269 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 19:40:33.082250  437269 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 19:40:33.082258  437269 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 19:40:33.082263  437269 command_runner.go:130] > # certificate on any modification event.
	I1014 19:40:33.082269  437269 command_runner.go:130] > # metrics_cert = ""
	I1014 19:40:33.082274  437269 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 19:40:33.082280  437269 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 19:40:33.082284  437269 command_runner.go:130] > # metrics_key = ""
	I1014 19:40:33.082292  437269 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 19:40:33.082295  437269 command_runner.go:130] > [crio.tracing]
	I1014 19:40:33.082300  437269 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 19:40:33.082306  437269 command_runner.go:130] > # enable_tracing = false
	I1014 19:40:33.082311  437269 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1014 19:40:33.082317  437269 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1014 19:40:33.082324  437269 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 19:40:33.082330  437269 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1014 19:40:33.082334  437269 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 19:40:33.082340  437269 command_runner.go:130] > [crio.nri]
	I1014 19:40:33.082345  437269 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 19:40:33.082350  437269 command_runner.go:130] > # enable_nri = true
	I1014 19:40:33.082354  437269 command_runner.go:130] > # NRI socket to listen on.
	I1014 19:40:33.082361  437269 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 19:40:33.082365  437269 command_runner.go:130] > # NRI plugin directory to use.
	I1014 19:40:33.082372  437269 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 19:40:33.082376  437269 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 19:40:33.082383  437269 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 19:40:33.082388  437269 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 19:40:33.082423  437269 command_runner.go:130] > # nri_disable_connections = false
	I1014 19:40:33.082431  437269 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 19:40:33.082435  437269 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 19:40:33.082440  437269 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 19:40:33.082444  437269 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 19:40:33.082451  437269 command_runner.go:130] > # NRI default validator configuration.
	I1014 19:40:33.082457  437269 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1014 19:40:33.082466  437269 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1014 19:40:33.082472  437269 command_runner.go:130] > # can be restricted/rejected:
	I1014 19:40:33.082476  437269 command_runner.go:130] > # - OCI hook injection
	I1014 19:40:33.082483  437269 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1014 19:40:33.082487  437269 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1014 19:40:33.082494  437269 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1014 19:40:33.082498  437269 command_runner.go:130] > # - adjustment of linux namespaces
	I1014 19:40:33.082506  437269 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1014 19:40:33.082514  437269 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1014 19:40:33.082519  437269 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1014 19:40:33.082524  437269 command_runner.go:130] > #
	I1014 19:40:33.082528  437269 command_runner.go:130] > # [crio.nri.default_validator]
	I1014 19:40:33.082535  437269 command_runner.go:130] > # nri_enable_default_validator = false
	I1014 19:40:33.082539  437269 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1014 19:40:33.082546  437269 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1014 19:40:33.082551  437269 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1014 19:40:33.082559  437269 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1014 19:40:33.082564  437269 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1014 19:40:33.082570  437269 command_runner.go:130] > # nri_validator_required_plugins = [
	I1014 19:40:33.082573  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082582  437269 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1014 19:40:33.082587  437269 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 19:40:33.082593  437269 command_runner.go:130] > [crio.stats]
	I1014 19:40:33.082598  437269 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 19:40:33.082608  437269 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 19:40:33.082614  437269 command_runner.go:130] > # stats_collection_period = 0
	I1014 19:40:33.082619  437269 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1014 19:40:33.082628  437269 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1014 19:40:33.082631  437269 command_runner.go:130] > # collection_period = 0
	I1014 19:40:33.082741  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:33.082769  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:33.082789  437269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:40:33.082811  437269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:40:33.082940  437269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:40:33.083002  437269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:40:33.091321  437269 command_runner.go:130] > kubeadm
	I1014 19:40:33.091339  437269 command_runner.go:130] > kubectl
	I1014 19:40:33.091351  437269 command_runner.go:130] > kubelet
	I1014 19:40:33.091376  437269 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:40:33.091429  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:40:33.099086  437269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:40:33.111962  437269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:40:33.125422  437269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
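[Editor's note: the rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new. On a cold start minikube would hand this file to kubeadm roughly like the sketch below; the exact flags vary by code path, and this run takes the restart path instead, as the config diff further down shows.]

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new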
	I1014 19:40:33.138383  437269 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:40:33.142436  437269 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1014 19:40:33.142515  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.229714  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:33.242948  437269 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:40:33.242967  437269 certs.go:195] generating shared ca certs ...
	I1014 19:40:33.242983  437269 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.243111  437269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:40:33.243147  437269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:40:33.243157  437269 certs.go:257] generating profile certs ...
	I1014 19:40:33.243244  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:40:33.243295  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:40:33.243331  437269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:40:33.243342  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 19:40:33.243354  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 19:40:33.243366  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 19:40:33.243378  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 19:40:33.243389  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 19:40:33.243402  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 19:40:33.243414  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 19:40:33.243426  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 19:40:33.243468  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:40:33.243499  437269 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:40:33.243509  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:40:33.243528  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:40:33.243550  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:40:33.243570  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:40:33.243605  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:33.243631  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.243646  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.243657  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.244241  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:40:33.262628  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:40:33.280949  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:40:33.299645  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:40:33.318581  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:40:33.336772  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:40:33.354893  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:40:33.372224  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:40:33.389816  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:40:33.407785  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:40:33.425006  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:40:33.442414  437269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:40:33.455418  437269 ssh_runner.go:195] Run: openssl version
	I1014 19:40:33.461786  437269 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1014 19:40:33.461878  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:40:33.470707  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474930  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474991  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.475040  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.510084  437269 command_runner.go:130] > 51391683
	I1014 19:40:33.510386  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:40:33.519147  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:40:33.528110  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532126  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532195  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532237  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.566452  437269 command_runner.go:130] > 3ec20f2e
	I1014 19:40:33.566529  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 19:40:33.575059  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:40:33.583998  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.587961  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588033  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588081  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.622398  437269 command_runner.go:130] > b5213941
	I1014 19:40:33.622796  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
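[Editor's note: the hash-then-symlink sequence above implements OpenSSL's hashed certificate directory: each CA in /etc/ssl/certs must be reachable through a link named <subject-hash>.0. A sketch of the pattern, using the minikubeCA values from this run:]

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941, as above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"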
	I1014 19:40:33.631371  437269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635295  437269 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635320  437269 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 19:40:33.635326  437269 command_runner.go:130] > Device: 8,1	Inode: 573968      Links: 1
	I1014 19:40:33.635332  437269 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:33.635341  437269 command_runner.go:130] > Access: 2025-10-14 19:36:24.950222095 +0000
	I1014 19:40:33.635346  437269 command_runner.go:130] > Modify: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635350  437269 command_runner.go:130] > Change: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635355  437269 command_runner.go:130] >  Birth: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635409  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:40:33.669731  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.670080  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:40:33.705048  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.705140  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:40:33.739547  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.739632  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:40:33.774590  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.774998  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:40:33.810800  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.810892  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 19:40:33.846191  437269 command_runner.go:130] > Certificate will not expire
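[Editor's note: each of the expiry probes above relies on openssl's -checkend flag: the command exits 0 and prints "Certificate will not expire" when the certificate is still valid 86400 seconds (24 hours) from now, and exits 1 otherwise, e.g.:]

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400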
	I1014 19:40:33.846525  437269 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:33.846626  437269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:40:33.846701  437269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:40:33.876026  437269 cri.go:89] found id: ""
	I1014 19:40:33.876095  437269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:40:33.883772  437269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 19:40:33.883800  437269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 19:40:33.883806  437269 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 19:40:33.884383  437269 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:40:33.884404  437269 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:40:33.884457  437269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:40:33.892144  437269 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:40:33.892232  437269 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.892262  437269 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "functional-744288" cluster setting kubeconfig missing "functional-744288" context setting]
	I1014 19:40:33.892554  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.893171  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.893322  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.893776  437269 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 19:40:33.893798  437269 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 19:40:33.893803  437269 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 19:40:33.893807  437269 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 19:40:33.893810  437269 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 19:40:33.893821  437269 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 19:40:33.894261  437269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:40:33.902475  437269 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 19:40:33.902513  437269 kubeadm.go:601] duration metric: took 18.102158ms to restartPrimaryControlPlane
	I1014 19:40:33.902527  437269 kubeadm.go:402] duration metric: took 56.015342ms to StartCluster
	I1014 19:40:33.902549  437269 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.902670  437269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.903326  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.903559  437269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:40:33.903636  437269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 19:40:33.903763  437269 addons.go:69] Setting storage-provisioner=true in profile "functional-744288"
	I1014 19:40:33.903782  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:33.903793  437269 addons.go:69] Setting default-storageclass=true in profile "functional-744288"
	I1014 19:40:33.903791  437269 addons.go:238] Setting addon storage-provisioner=true in "functional-744288"
	I1014 19:40:33.903828  437269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-744288"
	I1014 19:40:33.903863  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.904105  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.904258  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.906507  437269 out.go:179] * Verifying Kubernetes components...
	I1014 19:40:33.907562  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.925699  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.925934  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.926358  437269 addons.go:238] Setting addon default-storageclass=true in "functional-744288"
	I1014 19:40:33.926409  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.926937  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.928366  437269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:40:33.930195  437269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:33.930216  437269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:40:33.930272  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.952215  437269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:33.952244  437269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:40:33.952310  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.956857  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:33.971706  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
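The two cli_runner inspections above resolve which host port Docker mapped to the container's SSH port 22 (32898 here), and the sshutil lines then dial 127.0.0.1 on that port to copy the addon manifests in. A sketch of that port lookup with os/exec, using the same Go template the log shows; it assumes a local Docker daemon with a running container named functional-744288.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the cli_runner lines pass to docker: pull the
	// host port mapped to the container's SSH port 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"functional-744288").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}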
	I1014 19:40:34.006948  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:34.021044  437269 node_ready.go:35] waiting up to 6m0s for node "functional-744288" to be "Ready" ...
	I1014 19:40:34.021181  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.021246  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.021571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
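The round_trippers lines are klog request tracing: every GET against /api/v1/nodes/functional-744288 is logged with its verb, URL, and headers, and while the apiserver is down the matching "Response" line stays empty with milliseconds=0. client-go installs this tracer through the WrapTransport hook visible in the config dump above; the following is only a generic net/http sketch of the same wrapping idea, not the actual round_trippers.go.

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingRT wraps another RoundTripper and logs each request the way
// the round_trippers lines above do: verb, URL, status, elapsed time.
type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	status := "" // a refused connection yields no response, hence an empty status
	if resp != nil {
		status = resp.Status
	}
	log.Printf("%s %s -> %q err=%v (%dms)",
		req.Method, req.URL, status, err, time.Since(start).Milliseconds())
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-744288")
}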
	I1014 19:40:34.069169  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.082461  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.132558  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.132646  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.132686  437269 retry.go:31] will retry after 329.296623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.141809  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.144515  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.144547  437269 retry.go:31] will retry after 261.501781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
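Both applies fail the same way: kubectl cannot download the OpenAPI schema for client-side validation because nothing is listening on localhost:8441 yet, so addons.go hands the command to retry.go with a short, growing, jittered delay (329ms and 261ms here, roughly doubling in the attempts below). A self-contained sketch of that retry-with-backoff pattern, assuming a simple exponential schedule with jitter; it is not minikube's actual retry.go.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to maxAttempts times, sleeping a jittered, growing
// delay between failures, the same shape as the "will retry after ..."
// lines in the log above.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<attempt) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	attempts := 0
	err := retry(5, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 4 { // simulate the apiserver coming up on the 4th try
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}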
	I1014 19:40:34.407171  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.461386  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.461450  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.461492  437269 retry.go:31] will retry after 293.495478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.462464  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.513733  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.516544  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.516582  437269 retry.go:31] will retry after 480.429339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.521783  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.522176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:34.755667  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.810676  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.810724  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.810744  437269 retry.go:31] will retry after 614.479011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.998090  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.022038  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.022373  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.049799  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.052676  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.052709  437269 retry.go:31] will retry after 432.01436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.426352  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:35.482403  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.482455  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.482485  437269 retry.go:31] will retry after 1.057612851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.485602  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.522076  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.522160  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.522499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.537729  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.540612  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.540651  437269 retry.go:31] will retry after 1.151923723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.021224  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.021306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.021677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:36.021751  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
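From here the test settles into a 500ms polling loop: node_ready.go waits up to 6m0s for the node's Ready condition, and each connection-refused GET is logged as a warning and retried rather than treated as fatal. A client-go sketch of the same wait using wait.PollUntilContextTimeout from k8s.io/apimachinery (available in recent releases); the kubeconfig path is hypothetical and this mirrors the log's cadence, not minikube's exact code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching "waiting up to 6m0s"
	// above; a connection-refused error means "not ready yet", not failure.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-744288", metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver still coming up; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}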
	I1014 19:40:36.521540  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.521648  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:36.541250  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:36.596277  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.596343  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.596366  437269 retry.go:31] will retry after 858.341252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.693590  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:36.746070  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.749114  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.749145  437269 retry.go:31] will retry after 1.225575657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.021547  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.022054  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.455821  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:37.511587  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:37.511647  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.511676  437269 retry.go:31] will retry after 1.002490371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.521830  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.521912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.522269  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.974939  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:38.021626  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:38.022184  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:38.027734  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.030470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.030507  437269 retry.go:31] will retry after 1.025461199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.515193  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:38.521814  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.521914  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.522290  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:38.567735  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.570434  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.570473  437269 retry.go:31] will retry after 1.83061983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.022158  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.022656  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:39.056879  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:39.109896  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:39.112847  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.112884  437269 retry.go:31] will retry after 3.104822489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:40.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.021785  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.022244  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:40.022320  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:40.401833  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:40.453343  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:40.456347  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.456387  437269 retry.go:31] will retry after 3.646877865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.521728  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.522111  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.021801  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.022239  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.521918  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.522016  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.522380  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:42.022132  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.022218  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.022586  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:42.022649  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:42.217895  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:42.273119  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:42.273178  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.273199  437269 retry.go:31] will retry after 5.13792128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.521564  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.522122  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.022026  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.022112  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.521291  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.521385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.521849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.021813  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.021907  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.022272  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.103502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:44.156724  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:44.159470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.159502  437269 retry.go:31] will retry after 6.372961743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.522197  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.522799  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:44.522878  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:45.021683  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.021776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.022120  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:45.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.522209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.021967  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.022441  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.522181  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:47.022210  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.022296  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.022645  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:47.022716  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:47.412207  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:47.466705  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:47.466772  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.466800  437269 retry.go:31] will retry after 6.31356698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.522061  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.522426  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.022131  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.022208  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.022593  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.522267  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.522351  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.522727  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.021317  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.021410  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.021831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.521375  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.521474  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.521884  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:49.521959  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:50.021803  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.021896  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.022319  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.522068  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.522461  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.533648  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:50.590568  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:50.590621  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:50.590649  437269 retry.go:31] will retry after 8.10133009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:51.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.022324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.022671  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:51.521259  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:52.021339  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.021436  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.021838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:52.021911  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:52.521431  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.521523  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.521914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.021515  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.021632  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.022015  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.521689  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.522061  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.781554  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:53.838039  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:53.838101  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:53.838128  437269 retry.go:31] will retry after 9.837531091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:54.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:54.022235  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:40:56.521945  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:40:58.522630  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:58.692921  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:58.746193  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:58.749262  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:58.749295  437269 retry.go:31] will retry after 17.735335575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
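
	Each repeated 500ms cycle in this log is one readiness probe: a GET against /api/v1/nodes/functional-744288 whose Ready condition would be checked once the apiserver answers; while port 8441 refuses connections the probe just logs a warning and retries. Below is a minimal client-go sketch of such a loop, assuming a kubeconfig at the path shown in the log; it is an illustration of the polling technique, not minikube's actual node_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-744288", metav1.GetOptions{})
			if err != nil {
				// While the apiserver is down this is the "connection refused"
				// path seen in the log; keep polling.
				fmt.Println("error getting node (will retry):", err)
			} else if nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
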
	W1014 19:41:00.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:03.022475  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:03.675962  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:03.727887  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:03.730521  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:03.730562  437269 retry.go:31] will retry after 19.438885547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:05.522732  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:07.522816  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:10.022177  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:12.022549  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:14.022724  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:16.485413  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 19:41:16.522694  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:16.537285  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:16.540211  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:16.540239  437269 retry.go:31] will retry after 23.522391633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:19.021855  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:21.022242  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:23.022557  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:23.169796  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:23.227015  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:23.227096  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.227121  437269 retry.go:31] will retry after 24.705053737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:25.522363  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:28.021996  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:30.022824  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:32.522231  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:35.022244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:37.022414  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:39.022447  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:40.063502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:40.119488  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:40.119566  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:40.119604  437269 retry.go:31] will retry after 34.554126144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:41.522330  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	W1014 19:41:44.021956  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:44.521703  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.521821  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.522229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.022076  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.022158  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.022500  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.521283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.521372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.521787  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:46.021585  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.021687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.022067  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:46.022144  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:46.521959  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.522047  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.522400  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.022244  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.022326  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.022720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.521502  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.521586  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.521971  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.932453  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:47.984361  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:47.987254  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:47.987292  437269 retry.go:31] will retry after 37.673790461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
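
Note what the kubectl error actually says: validation failed because the OpenAPI schema could not be downloaded, and that schema download is simply the first round trip to hit the dead endpoint. Passing --validate=false, as the message suggests, would only skip validation; the apply itself still needs a live apiserver, so these retries cannot succeed until something listens on 8441 again. A quick reachability probe against the endpoint from the log (sketch only; /readyz is a standard apiserver health path, and certificate checks are skipped for brevity):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Probe the apiserver health endpoint to confirm the "connection refused"
    // seen in the log above.
    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
            },
        }
        resp, err := client.Get("https://192.168.49.2:8441/readyz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // matches the dial errors above
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver status:", resp.Status)
    }
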
	I1014 19:41:48.021563  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.021661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.022072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:48.521661  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.522153  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:48.522222  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:49.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.021869  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.022246  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:49.521919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.521999  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.021911  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.022358  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.522021  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.522121  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.522513  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:50.522647  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:51.022257  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.022355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.022711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:51.521301  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.521820  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.021365  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.021844  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.521373  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.521451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.521825  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:53.021413  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.021940  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:53.022029  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:53.521560  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.521663  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.522072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.021872  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.021964  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.521983  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.522067  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.021263  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.021357  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.521288  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.521376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.521739  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:55.521840  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:56.021322  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.021409  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:56.521370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.521452  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.521831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.022041  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.022397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.522061  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.522137  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.522480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:57.522553  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:58.022151  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.022236  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.022597  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:58.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.522322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.522668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.021717  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.521251  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.521330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.521703  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:00.021653  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.021752  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.022142  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:00.022220  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:00.522036  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.522123  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.522466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.022199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.022290  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.022633  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.521196  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.521278  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.521637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:02.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.022335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.022740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:02.022848  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:02.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.521405  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.521800  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.021392  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.021749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.521348  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.521443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.521938  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.021944  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.022035  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.022414  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.522132  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.522227  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.522582  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:04.522653  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:05.021481  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.021561  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.021905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:05.521556  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.521637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.522027  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.021613  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.021699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.022057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.521633  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:07.021749  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.021848  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.022194  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:07.022260  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:07.521871  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.521957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.021955  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.022031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.022379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.522039  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.522117  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.522476  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:09.022164  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.022634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:09.022701  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:09.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.521333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.521715  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.021732  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.022260  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.521865  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.522296  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.022051  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.022419  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.522129  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:11.522681  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:12.022256  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.022343  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.022700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:12.521278  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.521732  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.022114  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.022198  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.022561  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.522319  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.522711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:13.522798  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:14.021579  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.021707  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.022154  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.521710  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.522225  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.674573  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:42:14.729085  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729138  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729273  437269 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
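
Once the retry budget for an addon is spent, the failure is downgraded to a warning ("! Enabling 'default-storageclass' returned an error: running callbacks: [...]") and startup continues. A sketch of that callbacks-with-warnings shape, with illustrative names rather than minikube's addons.go:

    package main

    import (
        "errors"
        "fmt"
    )

    // enableAddons runs each addon enable as a callback; a callback error is
    // printed as a warning instead of aborting the whole start, matching the
    // out.go warning above.
    func enableAddons(callbacks map[string]func() error) (enabled []string) {
        for name, cb := range callbacks {
            if err := cb(); err != nil {
                fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
                continue
            }
            enabled = append(enabled, name)
        }
        return enabled
    }

    func main() {
        enabled := enableAddons(map[string]func() error{
            "default-storageclass": func() error { return errors.New("connection refused") },
        })
        fmt.Println("* Enabled addons:", enabled)
    }
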
	I1014 19:42:15.021737  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.021834  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.022205  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:15.521930  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.522012  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:16.022056  437269 type.go:168] "Request Body" body=""
	I1014 19:42:16.022143  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:16.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:16.022609  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:16.522173  437269 type.go:168] "Request Body" body=""
	I1014 19:42:16.522253  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:16.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:17.021294  437269 type.go:168] "Request Body" body=""
	I1014 19:42:17.021370  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:17.021733  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:17.521444  437269 type.go:168] "Request Body" body=""
	I1014 19:42:17.521548  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:17.521910  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:18.022124  437269 type.go:168] "Request Body" body=""
	I1014 19:42:18.022209  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:18.022551  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:18.022636  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:18.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:18.522276  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:18.522605  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:19.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:19.022337  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:19.022731  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:19.521317  437269 type.go:168] "Request Body" body=""
	I1014 19:42:19.521448  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:19.521836  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:20.021610  437269 type.go:168] "Request Body" body=""
	I1014 19:42:20.021710  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:20.022103  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:20.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:42:20.521810  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:20.522173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:20.522240  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:21.021782  437269 type.go:168] "Request Body" body=""
	I1014 19:42:21.021881  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:21.022300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:21.521996  437269 type.go:168] "Request Body" body=""
	I1014 19:42:21.522075  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:21.522493  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:22.022092  437269 type.go:168] "Request Body" body=""
	I1014 19:42:22.022170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:22.022570  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:22.522183  437269 type.go:168] "Request Body" body=""
	I1014 19:42:22.522272  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:22.522625  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:22.522688  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:23.021971  437269 type.go:168] "Request Body" body=""
	I1014 19:42:23.022063  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:23.022422  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:23.522081  437269 type.go:168] "Request Body" body=""
	I1014 19:42:23.522162  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:23.522509  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:24.022288  437269 type.go:168] "Request Body" body=""
	I1014 19:42:24.022385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:24.022833  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:24.521351  437269 type.go:168] "Request Body" body=""
	I1014 19:42:24.521424  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:24.521791  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:25.021730  437269 type.go:168] "Request Body" body=""
	I1014 19:42:25.021831  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:25.022212  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:25.022288  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:25.521848  437269 type.go:168] "Request Body" body=""
	I1014 19:42:25.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:25.522288  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:25.661672  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:42:25.715017  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717809  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717938  437269 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 19:42:25.719888  437269 out.go:179] * Enabled addons: 
	I1014 19:42:25.722455  437269 addons.go:514] duration metric: took 1m51.818834592s for enable addons: enabled=[]
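
The duration metric line above is a plain elapsed-time measurement wrapped around the whole enable phase; since every apply was refused, the enabled list comes back empty. Roughly, reusing the enableAddons sketch above (variable names assumed):

    // Timing the enable phase the way the "duration metric" log line reports it.
    start := time.Now()
    enabled := enableAddons(callbacks) // enableAddons/callbacks as sketched above
    fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
        time.Since(start), enabled)
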
	I1014 19:42:26.021269  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.021349  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.021816  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:26.521369  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.521477  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.521916  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:27.021507  437269 type.go:168] "Request Body" body=""
	I1014 19:42:27.021605  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:27.021991  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:27.521602  437269 type.go:168] "Request Body" body=""
	I1014 19:42:27.521721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:27.522084  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:27.522146  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:28.021642  437269 type.go:168] "Request Body" body=""
	I1014 19:42:28.021743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:28.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:28.521702  437269 type.go:168] "Request Body" body=""
	I1014 19:42:28.521807  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:28.522163  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:29.021797  437269 type.go:168] "Request Body" body=""
	I1014 19:42:29.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:29.022267  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:29.522074  437269 type.go:168] "Request Body" body=""
	I1014 19:42:29.522173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:29.522553  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:29.522671  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 request/empty-response cycle repeats every ~500 ms from 19:42:30.021560 through 19:43:29.522325, with a node_ready.go:55 "connection refused" retry warning logged roughly every 2–2.5 s ...]
	I1014 19:43:30.022166  437269 type.go:168] "Request Body" body=""
	I1014 19:43:30.022260  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:30.022687  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:30.522272  437269 type.go:168] "Request Body" body=""
	I1014 19:43:30.522355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:30.522747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:31.021385  437269 type.go:168] "Request Body" body=""
	I1014 19:43:31.021484  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:31.021909  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:31.521491  437269 type.go:168] "Request Body" body=""
	I1014 19:43:31.521578  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:31.522023  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:32.021606  437269 type.go:168] "Request Body" body=""
	I1014 19:43:32.021692  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:32.022091  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:32.022172  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:32.521661  437269 type.go:168] "Request Body" body=""
	I1014 19:43:32.521740  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:32.522158  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:33.021717  437269 type.go:168] "Request Body" body=""
	I1014 19:43:33.021815  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:33.022209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:33.521885  437269 type.go:168] "Request Body" body=""
	I1014 19:43:33.521973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:33.522384  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:34.021211  437269 type.go:168] "Request Body" body=""
	I1014 19:43:34.021293  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:34.021699  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:34.521252  437269 type.go:168] "Request Body" body=""
	I1014 19:43:34.521332  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:34.521740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:34.521854  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:35.021628  437269 type.go:168] "Request Body" body=""
	I1014 19:43:35.021734  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:35.022103  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:35.521777  437269 type.go:168] "Request Body" body=""
	I1014 19:43:35.521861  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:35.522282  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:36.021901  437269 type.go:168] "Request Body" body=""
	I1014 19:43:36.021991  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:36.022338  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:36.522081  437269 type.go:168] "Request Body" body=""
	I1014 19:43:36.522161  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:36.522532  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:36.522600  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:37.022222  437269 type.go:168] "Request Body" body=""
	I1014 19:43:37.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:37.022680  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:37.521261  437269 type.go:168] "Request Body" body=""
	I1014 19:43:37.521365  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:37.521784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:38.021342  437269 type.go:168] "Request Body" body=""
	I1014 19:43:38.021427  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:38.021897  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:38.521489  437269 type.go:168] "Request Body" body=""
	I1014 19:43:38.521583  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:38.521930  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:39.021573  437269 type.go:168] "Request Body" body=""
	I1014 19:43:39.021673  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:39.022106  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:39.022190  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:39.521695  437269 type.go:168] "Request Body" body=""
	I1014 19:43:39.521806  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:39.522190  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:40.022070  437269 type.go:168] "Request Body" body=""
	I1014 19:43:40.022155  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:40.022515  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:40.522191  437269 type.go:168] "Request Body" body=""
	I1014 19:43:40.522278  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:40.522665  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:41.021264  437269 type.go:168] "Request Body" body=""
	I1014 19:43:41.021347  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:41.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:41.521285  437269 type.go:168] "Request Body" body=""
	I1014 19:43:41.521368  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:41.521747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:41.521850  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:42.021332  437269 type.go:168] "Request Body" body=""
	I1014 19:43:42.021413  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:42.021835  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:42.521390  437269 type.go:168] "Request Body" body=""
	I1014 19:43:42.521492  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:42.521872  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:43.021448  437269 type.go:168] "Request Body" body=""
	I1014 19:43:43.021551  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:43.021984  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:43.521527  437269 type.go:168] "Request Body" body=""
	I1014 19:43:43.521610  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:43.521979  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:43.522054  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:44.021891  437269 type.go:168] "Request Body" body=""
	I1014 19:43:44.021982  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:44.022346  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:44.522015  437269 type.go:168] "Request Body" body=""
	I1014 19:43:44.522103  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:44.522480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:45.021474  437269 type.go:168] "Request Body" body=""
	I1014 19:43:45.021561  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:45.021945  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:45.521543  437269 type.go:168] "Request Body" body=""
	I1014 19:43:45.521646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:45.522059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:45.522127  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:46.021638  437269 type.go:168] "Request Body" body=""
	I1014 19:43:46.021729  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:46.022191  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:46.521736  437269 type.go:168] "Request Body" body=""
	I1014 19:43:46.521839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:46.522226  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:47.021891  437269 type.go:168] "Request Body" body=""
	I1014 19:43:47.021986  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:47.022382  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:47.522067  437269 type.go:168] "Request Body" body=""
	I1014 19:43:47.522151  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:47.522552  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:47.522621  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:48.022193  437269 type.go:168] "Request Body" body=""
	I1014 19:43:48.022285  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:48.022636  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:48.521224  437269 type.go:168] "Request Body" body=""
	I1014 19:43:48.521322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:48.521716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:49.021262  437269 type.go:168] "Request Body" body=""
	I1014 19:43:49.021340  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:49.021716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:49.521334  437269 type.go:168] "Request Body" body=""
	I1014 19:43:49.521413  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:49.521823  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:50.021743  437269 type.go:168] "Request Body" body=""
	I1014 19:43:50.021874  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:50.022283  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:50.022349  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:50.521963  437269 type.go:168] "Request Body" body=""
	I1014 19:43:50.522049  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:50.522461  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:51.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:43:51.022266  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:51.022629  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:51.522282  437269 type.go:168] "Request Body" body=""
	I1014 19:43:51.522383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:51.522865  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:52.021416  437269 type.go:168] "Request Body" body=""
	I1014 19:43:52.021507  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:52.021884  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:52.521517  437269 type.go:168] "Request Body" body=""
	I1014 19:43:52.521611  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:52.522082  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:52.522155  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:53.021656  437269 type.go:168] "Request Body" body=""
	I1014 19:43:53.021742  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:53.022136  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:53.521806  437269 type.go:168] "Request Body" body=""
	I1014 19:43:53.521891  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:53.522261  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:54.022341  437269 type.go:168] "Request Body" body=""
	I1014 19:43:54.022440  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:54.022890  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:54.521448  437269 type.go:168] "Request Body" body=""
	I1014 19:43:54.521552  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:54.521966  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:55.021854  437269 type.go:168] "Request Body" body=""
	I1014 19:43:55.021934  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:55.022336  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:55.022402  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:55.521987  437269 type.go:168] "Request Body" body=""
	I1014 19:43:55.522071  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:55.522460  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:56.022232  437269 type.go:168] "Request Body" body=""
	I1014 19:43:56.022316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:56.022653  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:56.521227  437269 type.go:168] "Request Body" body=""
	I1014 19:43:56.521302  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:56.521701  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:57.021269  437269 type.go:168] "Request Body" body=""
	I1014 19:43:57.021349  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:57.021719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:57.521302  437269 type.go:168] "Request Body" body=""
	I1014 19:43:57.521398  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:57.521838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:57.521899  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:58.021391  437269 type.go:168] "Request Body" body=""
	I1014 19:43:58.021485  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:58.021875  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:58.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:43:58.521550  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:58.521987  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:59.021602  437269 type.go:168] "Request Body" body=""
	I1014 19:43:59.021701  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:59.022089  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:59.521704  437269 type.go:168] "Request Body" body=""
	I1014 19:43:59.521805  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:59.522205  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:59.522272  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:00.022040  437269 type.go:168] "Request Body" body=""
	I1014 19:44:00.022132  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:00.022504  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:00.522200  437269 type.go:168] "Request Body" body=""
	I1014 19:44:00.522297  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:00.522735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:01.021297  437269 type.go:168] "Request Body" body=""
	I1014 19:44:01.021387  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:01.021784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:01.521307  437269 type.go:168] "Request Body" body=""
	I1014 19:44:01.521399  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:01.521850  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:02.021406  437269 type.go:168] "Request Body" body=""
	I1014 19:44:02.021500  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:02.021877  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:02.021945  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:02.521436  437269 type.go:168] "Request Body" body=""
	I1014 19:44:02.521539  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:02.521953  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:03.021516  437269 type.go:168] "Request Body" body=""
	I1014 19:44:03.021598  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:03.022005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:03.521561  437269 type.go:168] "Request Body" body=""
	I1014 19:44:03.521646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:03.522077  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:04.021994  437269 type.go:168] "Request Body" body=""
	I1014 19:44:04.022079  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:04.022499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:04.022572  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:04.522163  437269 type.go:168] "Request Body" body=""
	I1014 19:44:04.522255  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:04.522672  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:05.021565  437269 type.go:168] "Request Body" body=""
	I1014 19:44:05.021656  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:05.022053  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:05.521629  437269 type.go:168] "Request Body" body=""
	I1014 19:44:05.521713  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:05.522128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:06.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:44:06.021801  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:06.022188  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:06.521851  437269 type.go:168] "Request Body" body=""
	I1014 19:44:06.521937  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:06.522347  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:06.522417  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:07.022007  437269 type.go:168] "Request Body" body=""
	I1014 19:44:07.022086  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:07.022436  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:07.522203  437269 type.go:168] "Request Body" body=""
	I1014 19:44:07.522282  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:07.522638  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:08.021309  437269 type.go:168] "Request Body" body=""
	I1014 19:44:08.021397  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:08.021803  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:08.521985  437269 type.go:168] "Request Body" body=""
	I1014 19:44:08.522062  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:08.522422  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:08.522484  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:09.022109  437269 type.go:168] "Request Body" body=""
	I1014 19:44:09.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:09.022550  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:09.522226  437269 type.go:168] "Request Body" body=""
	I1014 19:44:09.522312  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:09.522687  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:10.021566  437269 type.go:168] "Request Body" body=""
	I1014 19:44:10.021708  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:10.022064  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:10.521657  437269 type.go:168] "Request Body" body=""
	I1014 19:44:10.521776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:10.522143  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:11.021701  437269 type.go:168] "Request Body" body=""
	I1014 19:44:11.021797  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:11.022127  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:11.022194  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:11.521807  437269 type.go:168] "Request Body" body=""
	I1014 19:44:11.521884  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:11.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:12.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:44:12.022049  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:12.022424  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:12.522133  437269 type.go:168] "Request Body" body=""
	I1014 19:44:12.522233  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:12.522615  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:13.022268  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.022358  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.022774  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:13.022845  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:13.521351  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.521431  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.521806  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.021818  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.021912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.522064  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.522156  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.522518  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.021381  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.021468  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.021826  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.521487  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:15.521934  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:16.021382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.021472  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:16.521402  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.521496  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.521958  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.021537  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.021618  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.022006  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.521572  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.521652  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.522068  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:17.522135  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:18.021636  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.022112  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:18.521664  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.521790  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.522173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.021791  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.021887  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.022264  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.521890  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.521989  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:19.522432  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:20.022234  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.022313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:20.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.521321  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.021357  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.021856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.521555  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.521969  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:22.021534  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.022029  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:22.022098  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 request/response cycle repeats unchanged every ~500ms from 19:44:22 through 19:45:21, and the same "connection refused" warning recurs roughly every 2s; only the final warning is kept below ...]
	W1014 19:45:21.022430  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:21.521976  437269 type.go:168] "Request Body" body=""
	I1014 19:45:21.522056  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:21.522417  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:22.022086  437269 type.go:168] "Request Body" body=""
	I1014 19:45:22.022170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:22.022544  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:22.522193  437269 type.go:168] "Request Body" body=""
	I1014 19:45:22.522282  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:22.522668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:23.021253  437269 type.go:168] "Request Body" body=""
	I1014 19:45:23.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:23.021784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:23.521356  437269 type.go:168] "Request Body" body=""
	I1014 19:45:23.521450  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:23.521977  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:23.522059  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:24.021741  437269 type.go:168] "Request Body" body=""
	I1014 19:45:24.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:24.022224  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:24.521890  437269 type.go:168] "Request Body" body=""
	I1014 19:45:24.521984  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:24.522357  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:25.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:45:25.022360  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:25.022739  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:25.521985  437269 type.go:168] "Request Body" body=""
	I1014 19:45:25.522068  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:25.522428  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:25.522491  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:26.022071  437269 type.go:168] "Request Body" body=""
	I1014 19:45:26.022170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:26.022519  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:26.521198  437269 type.go:168] "Request Body" body=""
	I1014 19:45:26.521288  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:26.521676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:27.021978  437269 type.go:168] "Request Body" body=""
	I1014 19:45:27.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:27.022419  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:27.522151  437269 type.go:168] "Request Body" body=""
	I1014 19:45:27.522230  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:27.522643  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:27.522714  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:28.021218  437269 type.go:168] "Request Body" body=""
	I1014 19:45:28.021312  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:28.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:28.521312  437269 type.go:168] "Request Body" body=""
	I1014 19:45:28.521403  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:28.521840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:29.021354  437269 type.go:168] "Request Body" body=""
	I1014 19:45:29.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:29.021854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:29.521378  437269 type.go:168] "Request Body" body=""
	I1014 19:45:29.521458  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:29.521850  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:30.021662  437269 type.go:168] "Request Body" body=""
	I1014 19:45:30.021789  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:30.022146  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:30.022213  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:30.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:45:30.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:30.522211  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:31.021880  437269 type.go:168] "Request Body" body=""
	I1014 19:45:31.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:31.022332  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:31.522123  437269 type.go:168] "Request Body" body=""
	I1014 19:45:31.522204  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:31.522575  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:32.022205  437269 type.go:168] "Request Body" body=""
	I1014 19:45:32.022295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:32.022647  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:32.022725  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:32.521198  437269 type.go:168] "Request Body" body=""
	I1014 19:45:32.521290  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:32.521668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:33.021206  437269 type.go:168] "Request Body" body=""
	I1014 19:45:33.021284  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:33.021669  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:33.521252  437269 type.go:168] "Request Body" body=""
	I1014 19:45:33.521335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:33.521732  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:34.021648  437269 type.go:168] "Request Body" body=""
	I1014 19:45:34.021738  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:34.022124  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:34.521677  437269 type.go:168] "Request Body" body=""
	I1014 19:45:34.521786  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:34.522167  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:34.522228  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:35.021984  437269 type.go:168] "Request Body" body=""
	I1014 19:45:35.022074  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:35.022422  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:35.522074  437269 type.go:168] "Request Body" body=""
	I1014 19:45:35.522161  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:35.522560  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:36.022246  437269 type.go:168] "Request Body" body=""
	I1014 19:45:36.022332  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:36.022735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:36.521326  437269 type.go:168] "Request Body" body=""
	I1014 19:45:36.521412  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:36.521843  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:37.021388  437269 type.go:168] "Request Body" body=""
	I1014 19:45:37.021485  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:37.021891  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:37.021957  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:37.521503  437269 type.go:168] "Request Body" body=""
	I1014 19:45:37.521585  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:37.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:38.021579  437269 type.go:168] "Request Body" body=""
	I1014 19:45:38.021679  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:38.022059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:38.521663  437269 type.go:168] "Request Body" body=""
	I1014 19:45:38.521751  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:38.522160  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:39.021909  437269 type.go:168] "Request Body" body=""
	I1014 19:45:39.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:39.022378  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:39.022449  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:39.522030  437269 type.go:168] "Request Body" body=""
	I1014 19:45:39.522107  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:39.522416  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:40.021388  437269 type.go:168] "Request Body" body=""
	I1014 19:45:40.021481  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:40.021844  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:40.521422  437269 type.go:168] "Request Body" body=""
	I1014 19:45:40.521523  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:40.521966  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:41.021564  437269 type.go:168] "Request Body" body=""
	I1014 19:45:41.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:41.022031  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:41.521648  437269 type.go:168] "Request Body" body=""
	I1014 19:45:41.521734  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:41.522167  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:41.522236  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:42.021731  437269 type.go:168] "Request Body" body=""
	I1014 19:45:42.021836  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:42.022192  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:42.521731  437269 type.go:168] "Request Body" body=""
	I1014 19:45:42.521839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:42.522217  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:43.021906  437269 type.go:168] "Request Body" body=""
	I1014 19:45:43.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:43.022331  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:43.522111  437269 type.go:168] "Request Body" body=""
	I1014 19:45:43.522198  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:43.522589  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:43.522675  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:44.021291  437269 type.go:168] "Request Body" body=""
	I1014 19:45:44.021372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:44.021800  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:44.521363  437269 type.go:168] "Request Body" body=""
	I1014 19:45:44.521449  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:44.521869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:45.021752  437269 type.go:168] "Request Body" body=""
	I1014 19:45:45.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:45.022233  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:45.521855  437269 type.go:168] "Request Body" body=""
	I1014 19:45:45.521941  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:45.522316  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:46.022006  437269 type.go:168] "Request Body" body=""
	I1014 19:45:46.022095  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:46.022499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:46.022579  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:46.522210  437269 type.go:168] "Request Body" body=""
	I1014 19:45:46.522318  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:46.522722  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:47.021283  437269 type.go:168] "Request Body" body=""
	I1014 19:45:47.021385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:47.021781  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:47.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:45:47.521536  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:47.521995  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:48.021575  437269 type.go:168] "Request Body" body=""
	I1014 19:45:48.021686  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:48.022099  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:48.521787  437269 type.go:168] "Request Body" body=""
	I1014 19:45:48.521871  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:48.522261  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:48.522369  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:49.021944  437269 type.go:168] "Request Body" body=""
	I1014 19:45:49.022027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:49.022513  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:49.522168  437269 type.go:168] "Request Body" body=""
	I1014 19:45:49.522247  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:49.522598  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:50.021501  437269 type.go:168] "Request Body" body=""
	I1014 19:45:50.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:50.022004  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:50.521581  437269 type.go:168] "Request Body" body=""
	I1014 19:45:50.521669  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:50.522045  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:51.021656  437269 type.go:168] "Request Body" body=""
	I1014 19:45:51.021788  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:51.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:51.022212  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:51.521847  437269 type.go:168] "Request Body" body=""
	I1014 19:45:51.521925  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:51.522299  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:52.022088  437269 type.go:168] "Request Body" body=""
	I1014 19:45:52.022197  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:52.022587  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:52.522247  437269 type.go:168] "Request Body" body=""
	I1014 19:45:52.522330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:52.522658  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:53.021334  437269 type.go:168] "Request Body" body=""
	I1014 19:45:53.021438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:53.021860  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:53.521371  437269 type.go:168] "Request Body" body=""
	I1014 19:45:53.521458  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:53.521812  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:53.521887  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:54.021737  437269 type.go:168] "Request Body" body=""
	I1014 19:45:54.021853  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:54.022236  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:54.521871  437269 type.go:168] "Request Body" body=""
	I1014 19:45:54.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:54.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:55.022188  437269 type.go:168] "Request Body" body=""
	I1014 19:45:55.022267  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:55.022698  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:55.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:45:55.521387  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:55.521745  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:56.021324  437269 type.go:168] "Request Body" body=""
	I1014 19:45:56.021405  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:56.021853  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:56.021933  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:56.521381  437269 type.go:168] "Request Body" body=""
	I1014 19:45:56.521492  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:56.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:57.021449  437269 type.go:168] "Request Body" body=""
	I1014 19:45:57.021569  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:57.022053  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:57.521631  437269 type.go:168] "Request Body" body=""
	I1014 19:45:57.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:57.522096  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:58.021695  437269 type.go:168] "Request Body" body=""
	I1014 19:45:58.021812  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:58.022220  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:58.022300  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:58.521874  437269 type.go:168] "Request Body" body=""
	I1014 19:45:58.521965  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:58.522333  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:59.021991  437269 type.go:168] "Request Body" body=""
	I1014 19:45:59.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:59.022475  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:59.522167  437269 type.go:168] "Request Body" body=""
	I1014 19:45:59.522245  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:59.522597  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:00.021599  437269 type.go:168] "Request Body" body=""
	I1014 19:46:00.021701  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:00.022127  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:00.521743  437269 type.go:168] "Request Body" body=""
	I1014 19:46:00.521861  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:00.522238  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:00.522338  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:01.022015  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.022109  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:01.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.522284  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.522792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.021802  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.521435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:03.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.021512  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.021843  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:03.021936  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:03.521495  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.521638  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.522055  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.022126  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.022216  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.022594  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.522303  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.522679  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:05.021591  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.021704  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:05.022161  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:05.521689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.521808  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.522192  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.021790  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.022280  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.521951  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.522040  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.522397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:07.022069  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.022173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:07.022606  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:07.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.522298  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.522637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.021220  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.021314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.021696  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.521279  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.521778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.021343  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.021451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.021866  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.521459  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.521838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:09.521913  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:10.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.021744  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:10.521668  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.521745  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.522134  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.021817  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.022226  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.521863  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.521950  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.522316  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:11.522391  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:12.022004  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.022466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:12.522152  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.522231  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.522572  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.022208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.022686  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.521212  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.521286  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.521620  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:14.021358  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.021869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:14.021948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:14.521427  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.521830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.521922  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.522020  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.522429  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:16.022119  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.022517  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:16.022586  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:16.521207  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.521315  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.521711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.021272  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.021355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.021723  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.521289  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.021359  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.021849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:18.521988  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:19.021521  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:19.521715  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.022258  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.022646  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.522713  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:20.522805  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:21.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.021805  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:21.521347  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.521438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.021364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.021456  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.021861  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.521399  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.521520  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.521917  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:23.021531  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.021637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.022036  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:23.022100  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:23.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.522062  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.021884  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.021977  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.022350  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.522011  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.522097  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.522508  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.021512  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.021596  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.521632  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.521726  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.522148  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:25.522244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:26.021740  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.022219  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:26.521873  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.521956  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.022036  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.022129  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.522188  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:27.522745  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:28.021236  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.021317  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.021676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:28.521949  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.522027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.522409  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.022101  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.022190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.022539  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.522171  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.522256  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.522639  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:30.021643  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:30.022208  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:30.521811  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.521894  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.522289  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.022066  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.022164  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.522208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.522719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.021314  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.021832  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.521461  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:32.521920  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:33.021401  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:33.521545  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.521653  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:34.021736  437269 type.go:168] "Request Body" body=""
	I1014 19:46:34.022027  437269 node_ready.go:38] duration metric: took 6m0.00093705s for node "functional-744288" to be "Ready" ...
	I1014 19:46:34.025220  437269 out.go:203] 
	W1014 19:46:34.026860  437269 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 19:46:34.026878  437269 out.go:285] * 
	W1014 19:46:34.028574  437269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:46:34.030019  437269 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:46:25 functional-744288 crio[2959]: time="2025-10-14T19:46:25.865897028Z" level=info msg="createCtr: removing container ccfc95ec370c10a716864fba39534e209cf0a9312e0db89b974a3376ffb370eb" id=86cb8549-6226-44a6-bfd3-04e5ed39afcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:25 functional-744288 crio[2959]: time="2025-10-14T19:46:25.865934422Z" level=info msg="createCtr: deleting container ccfc95ec370c10a716864fba39534e209cf0a9312e0db89b974a3376ffb370eb from storage" id=86cb8549-6226-44a6-bfd3-04e5ed39afcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:25 functional-744288 crio[2959]: time="2025-10-14T19:46:25.868294101Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=86cb8549-6226-44a6-bfd3-04e5ed39afcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.836445983Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6720772a-365f-4781-b1cb-e939e61a06dd name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.83732868Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=34baf7b3-8454-46e1-a99d-960fa0cd9960 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.838169422Z" level=info msg="Creating container: kube-system/etcd-functional-744288/etcd" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.838395543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.841772406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.842221085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.858687002Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.860065361Z" level=info msg="createCtr: deleting container ID 9270548b27f70a937f8292953c95d0e27e84d0b0e7f88e9c1caa4e28f165c013 from idIndex" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.860102297Z" level=info msg="createCtr: removing container 9270548b27f70a937f8292953c95d0e27e84d0b0e7f88e9c1caa4e28f165c013" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.860131936Z" level=info msg="createCtr: deleting container 9270548b27f70a937f8292953c95d0e27e84d0b0e7f88e9c1caa4e28f165c013 from storage" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.862154889Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.836454753Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9f0e354c-f410-49cf-b40b-5dc3a2f068d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.837508308Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ecf135b6-fb09-4b1e-818a-ea425e1d5802 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.838541155Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.83878557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.842384767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.84286761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.863270708Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.864875307Z" level=info msg="createCtr: deleting container ID 7ce193f3d90a0164dc5f8a119bedab1855a8d1ceee719b1104fb805a11139ec2 from idIndex" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.864933692Z" level=info msg="createCtr: removing container 7ce193f3d90a0164dc5f8a119bedab1855a8d1ceee719b1104fb805a11139ec2" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.864984536Z" level=info msg="createCtr: deleting container 7ce193f3d90a0164dc5f8a119bedab1855a8d1ceee719b1104fb805a11139ec2 from storage" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.867380722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_7dacb23619ff0889511bcb2e81339e77_0" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:46:35.775968    4356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:35.776544    4356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:35.778180    4356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:35.778669    4356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:35.779838    4356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:46:35 up  2:29,  0 user,  load average: 0.03, 0.05, 2.27
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.836002    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.862434    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:27 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:27 functional-744288 kubelet[1809]:  > podSandboxID="de75312ccca355aabaabb18a5eb1e6d7a7e4d5b3fb088ce1c5eb28a39d567355"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.862531    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:27 functional-744288 kubelet[1809]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:27 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.862564    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.883708    1809 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:46:28 functional-744288 kubelet[1809]: E1014 19:46:28.516743    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:46:28 functional-744288 kubelet[1809]: I1014 19:46:28.739368    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:46:28 functional-744288 kubelet[1809]: E1014 19:46:28.739808    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.835893    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.867767    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:33 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:33 functional-744288 kubelet[1809]:  > podSandboxID="d501fdff2b92902ecd1a22b235a50d225f771b04701776d8a1bb0e78b9481d1c"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.867885    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:33 functional-744288 kubelet[1809]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(7dacb23619ff0889511bcb2e81339e77): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:33 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.867920    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="7dacb23619ff0889511bcb2e81339e77"
	Oct 14 19:46:34 functional-744288 kubelet[1809]: E1014 19:46:34.372128    1809 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 19:46:34 functional-744288 kubelet[1809]: E1014 19:46:34.925423    1809 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 19:46:35 functional-744288 kubelet[1809]: E1014 19:46:35.517271    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:46:35 functional-744288 kubelet[1809]: I1014 19:46:35.741651    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:46:35 functional-744288 kubelet[1809]: E1014 19:46:35.742085    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	

                                                
                                                
-- /stdout --
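
Note: the wait loop in the log above polls GET /api/v1/nodes/functional-744288 roughly every 500ms (node_ready.go) until the 6m deadline, and every attempt is refused at 192.168.49.2:8441. A minimal way to express the same wait by hand, assuming a working kubeconfig context for this profile (an illustration only; minikube performs this poll through client-go, not kubectl):

	# block until the node reports Ready, or give up after 6 minutes
	kubectl --context functional-744288 wait --for=condition=Ready node/functional-744288 --timeout=6m
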
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (310.439192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.74s)
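
Note: the recurring CreateContainer failure in the CRI-O and kubelet logs above ("cannot open sd-bus: No such file or directory") is what keeps etcd and kube-apiserver from ever starting, which in turn leaves port 8441 refusing connections and the node never Ready. That error is consistent with CRI-O invoking the systemd cgroup manager in an environment where systemd's D-Bus socket is unavailable. A hypothetical first check, assuming CRI-O's default config location inside the node:

	# inspect which cgroup manager CRI-O is configured to use
	minikube ssh -p functional-744288 -- sudo grep -rn cgroup_manager /etc/crio
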

                                                
                                    
TestFunctional/serial/KubectlGetPods (2.21s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-744288 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-744288 get po -A: exit status 1 (58.907027ms)

                                                
                                                
** stderr ** 
	E1014 19:46:36.790834  440880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:36.791349  440880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:36.792845  440880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:36.793206  440880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:36.794611  440880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-744288 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1014 19:46:36.790834  440880 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1014 19:46:36.791349  440880 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1014 19:46:36.792845  440880 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1014 19:46:36.793206  440880 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1014 19:46:36.794611  440880 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-744288 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-744288 get po -A"
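
Note: all five stderr lines above are the same underlying failure, a TCP connection refusal on 192.168.49.2:8441, so kubectl never completes API discovery. A minimal reachability probe against the same endpoint (standard curl flags; /livez is the apiserver's liveness path):

	# expect "ok" from a healthy apiserver; here the connection is refused
	curl -k --connect-timeout 5 https://192.168.49.2:8441/livez
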
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
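
Note: per the NetworkSettings block above, the container publishes 8441/tcp on 127.0.0.1:32901, so the guest apiserver port is only reachable from the host through that loopback mapping. The binding can be read back with docker's Go-template inspect format (standard docker CLI syntax):

	# print the host port mapped to the apiserver's 8441/tcp (32901 above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-744288
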
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (308.821676ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 logs -n 25: (1.019978055s)
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-102449                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-102449   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ --download-only -p download-docker-042272 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-042272 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p download-docker-042272                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-042272 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ --download-only -p binary-mirror-194366 --alsologtostderr --binary-mirror http://127.0.0.1:45401 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-194366   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p binary-mirror-194366                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-194366   │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ enable dashboard -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ start   │ -p addons-995790 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ -p addons-995790                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-995790          │ jenkins │ v1.37.0 │ 14 Oct 25 19:23 UTC │ 14 Oct 25 19:23 UTC │
	│ start   │ -p nospam-442016 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-442016 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:23 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ start   │ nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │                     │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-442016          │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-744288      │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	│ start   │ -p functional-744288 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-744288      │ jenkins │ v1.37.0 │ 14 Oct 25 19:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:40:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
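	(Decoded with the format above, the first entry below, "I1014 19:40:29.999204  437269 out.go:360]", reads: severity I(nfo), date 10/14, wall time 19:40:29.999204, thread id 437269, source out.go line 360.)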
	I1014 19:40:29.999204  437269 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:40:29.999451  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999459  437269 out.go:374] Setting ErrFile to fd 2...
	I1014 19:40:29.999463  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999664  437269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:40:30.000162  437269 out.go:368] Setting JSON to false
	I1014 19:40:30.001140  437269 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8576,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:40:30.001253  437269 start.go:141] virtualization: kvm guest
	I1014 19:40:30.003929  437269 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:40:30.005394  437269 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:40:30.005413  437269 notify.go:220] Checking for updates...
	I1014 19:40:30.008578  437269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:40:30.009922  437269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:30.011325  437269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:40:30.012721  437269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:40:30.014074  437269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:40:30.015738  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:30.015851  437269 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:40:30.041344  437269 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:40:30.041571  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.106855  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.095983875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.106976  437269 docker.go:318] overlay module found
	I1014 19:40:30.108953  437269 out.go:179] * Using the docker driver based on existing profile
	I1014 19:40:30.110337  437269 start.go:305] selected driver: docker
	I1014 19:40:30.110363  437269 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.110446  437269 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:40:30.110529  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.176521  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.165510899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.177154  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:30.177215  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:30.177273  437269 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.179329  437269 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:40:30.180795  437269 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:40:30.182356  437269 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:40:30.183701  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:30.183742  437269 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:40:30.183752  437269 cache.go:58] Caching tarball of preloaded images
	I1014 19:40:30.183799  437269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:40:30.183863  437269 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:40:30.183877  437269 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:40:30.183979  437269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:40:30.204077  437269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:40:30.204098  437269 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:40:30.204114  437269 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:40:30.204155  437269 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:40:30.204220  437269 start.go:364] duration metric: took 47.096µs to acquireMachinesLock for "functional-744288"
	I1014 19:40:30.204240  437269 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:40:30.204245  437269 fix.go:54] fixHost starting: 
	I1014 19:40:30.204447  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:30.222380  437269 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:40:30.222430  437269 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:40:30.224794  437269 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:40:30.224832  437269 machine.go:93] provisionDockerMachine start ...
	I1014 19:40:30.224915  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.243631  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.243897  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.243914  437269 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:40:30.392088  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.392121  437269 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:40:30.392200  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.410333  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.410549  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.410563  437269 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:40:30.567306  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.567398  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.585534  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.585774  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.585794  437269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:40:30.733740  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
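	(The SSH command above is the standard Debian-style /etc/hosts fixup; the same script, annotated, with the profile name taken from this run:)
	
	# skip if some line of /etc/hosts already ends with the hostname
	if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			# a 127.0.1.1 entry exists: rewrite it in place
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
		else
			# no 127.0.1.1 entry yet: append one
			echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts;
		fi
	fi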
	I1014 19:40:30.733790  437269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:40:30.733813  437269 ubuntu.go:190] setting up certificates
	I1014 19:40:30.733825  437269 provision.go:84] configureAuth start
	I1014 19:40:30.733878  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:30.751946  437269 provision.go:143] copyHostCerts
	I1014 19:40:30.751989  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752023  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:40:30.752048  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752133  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:40:30.752237  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752267  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:40:30.752278  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752320  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:40:30.752387  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752412  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:40:30.752422  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752463  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:40:30.752709  437269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:40:31.076864  437269 provision.go:177] copyRemoteCerts
	I1014 19:40:31.076930  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:40:31.076971  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.095322  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.200396  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 19:40:31.200473  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:40:31.218084  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 19:40:31.218140  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:40:31.235905  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 19:40:31.235974  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:40:31.253074  437269 provision.go:87] duration metric: took 519.232689ms to configureAuth
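	(configureAuth above regenerates the machine server certificate against the minikube CA with the SANs listed at 19:40:30.752709; minikube does this in Go, but an equivalent openssl sketch, with illustrative file names, would be:)
	
	# issue a server cert signed by the minikube CA, covering the same SANs
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.functional-744288"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-744288,DNS:localhost,DNS:minikube')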
	I1014 19:40:31.253110  437269 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:40:31.253264  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:31.253357  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.271451  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:31.271661  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:31.271677  437269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:40:31.540521  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:40:31.540549  437269 machine.go:96] duration metric: took 1.315709373s to provisionDockerMachine
	I1014 19:40:31.540561  437269 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:40:31.540571  437269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:40:31.540628  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:40:31.540669  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.559297  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.665251  437269 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:40:31.669234  437269 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1014 19:40:31.669258  437269 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1014 19:40:31.669267  437269 command_runner.go:130] > VERSION_ID="12"
	I1014 19:40:31.669270  437269 command_runner.go:130] > VERSION="12 (bookworm)"
	I1014 19:40:31.669276  437269 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1014 19:40:31.669279  437269 command_runner.go:130] > ID=debian
	I1014 19:40:31.669283  437269 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1014 19:40:31.669288  437269 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1014 19:40:31.669293  437269 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1014 19:40:31.669341  437269 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:40:31.669359  437269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:40:31.669371  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:40:31.669425  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:40:31.669510  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:40:31.669525  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 19:40:31.669592  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:40:31.669600  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> /etc/test/nested/copy/417373/hosts
	I1014 19:40:31.669633  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:40:31.677988  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:31.696543  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:40:31.715275  437269 start.go:296] duration metric: took 174.687158ms for postStartSetup
	I1014 19:40:31.715383  437269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:40:31.715428  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.734376  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.836456  437269 command_runner.go:130] > 39%
	I1014 19:40:31.836544  437269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:40:31.841513  437269 command_runner.go:130] > 178G
	I1014 19:40:31.841552  437269 fix.go:56] duration metric: took 1.637302821s for fixHost
	I1014 19:40:31.841566  437269 start.go:83] releasing machines lock for "functional-744288", held for 1.637335022s
	I1014 19:40:31.841633  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:31.859002  437269 ssh_runner.go:195] Run: cat /version.json
	I1014 19:40:31.859036  437269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:40:31.859053  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.859093  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.877314  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.877547  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.978415  437269 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1014 19:40:31.978583  437269 ssh_runner.go:195] Run: systemctl --version
	I1014 19:40:32.030433  437269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 19:40:32.032548  437269 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1014 19:40:32.032581  437269 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1014 19:40:32.032653  437269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:40:32.071124  437269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 19:40:32.075797  437269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 19:40:32.076143  437269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:40:32.076213  437269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:40:32.084774  437269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:40:32.084802  437269 start.go:495] detecting cgroup driver to use...
	I1014 19:40:32.084841  437269 detect.go:190] detected "systemd" cgroup driver on host os
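	(detect.go reports the "systemd" cgroup driver here; a quick manual cross-check of the host's cgroup layout, not necessarily what detect.go does internally:)
	
	stat -fc %T /sys/fs/cgroup    # "cgroup2fs" means cgroup v2 (unified hierarchy)
	ps -p 1 -o comm=              # "systemd" as PID 1 is what the systemd driver assumes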
	I1014 19:40:32.084885  437269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:40:32.100807  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:40:32.114918  437269 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:40:32.115001  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:40:32.131082  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:40:32.145731  437269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:40:32.234963  437269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:40:32.329593  437269 docker.go:234] disabling docker service ...
	I1014 19:40:32.329671  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:40:32.344729  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:40:32.357712  437269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:40:32.445038  437269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:40:32.534134  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:40:32.547615  437269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:40:32.562780  437269 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 19:40:32.562835  437269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:40:32.562884  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.572580  437269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:40:32.572655  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.581715  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.590624  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.599492  437269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:40:32.607979  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.617026  437269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.625607  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.634661  437269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:40:32.642022  437269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 19:40:32.642101  437269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:40:32.649948  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:32.737827  437269 ssh_runner.go:195] Run: sudo systemctl restart crio
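	(Condensed, the sed chain from 19:40:32.562 onward rewrites CRI-O's drop-in config as follows; the commands are the ones logged above, comments added:)
	
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# pin the pause image and align the cgroup manager with the host's systemd driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' $CONF
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' $CONF
	# conmon joins the pod cgroup rather than its own slice
	sudo sed -i '/conmon_cgroup = .*/d' $CONF
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' $CONF
	# let unprivileged pods bind low ports
	sudo grep -q '^ *default_sysctls' $CONF || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' $CONF
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' $CONF
	# verify bridged traffic hits iptables, enable IPv4 forwarding, reload
	sudo sysctl net.bridge.bridge-nf-call-iptables
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio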
	I1014 19:40:32.854779  437269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:40:32.854851  437269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:40:32.859353  437269 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 19:40:32.859376  437269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 19:40:32.859382  437269 command_runner.go:130] > Device: 0,59	Inode: 3887        Links: 1
	I1014 19:40:32.859389  437269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:32.859394  437269 command_runner.go:130] > Access: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859399  437269 command_runner.go:130] > Modify: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859403  437269 command_runner.go:130] > Change: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859408  437269 command_runner.go:130] >  Birth: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859438  437269 start.go:563] Will wait 60s for crictl version
	I1014 19:40:32.859485  437269 ssh_runner.go:195] Run: which crictl
	I1014 19:40:32.863222  437269 command_runner.go:130] > /usr/local/bin/crictl
	I1014 19:40:32.863312  437269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:40:32.889462  437269 command_runner.go:130] > Version:  0.1.0
	I1014 19:40:32.889482  437269 command_runner.go:130] > RuntimeName:  cri-o
	I1014 19:40:32.889486  437269 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1014 19:40:32.889490  437269 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 19:40:32.889505  437269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
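	(The two 60s waits above amount to polling for the runtime socket and then asking the CRI for its version; roughly, by hand:)
	
	# wait up to 60s for the CRI-O socket to appear, then query it
	timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
	sudo "$(which crictl)" version    # expect RuntimeName: cri-o, RuntimeApiVersion: v1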
	I1014 19:40:32.889559  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.920224  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.920251  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.920258  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.920266  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.920279  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.920285  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.920291  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.920303  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.920312  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.920322  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.920332  437269 command_runner.go:130] >      static
	I1014 19:40:32.920340  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.920347  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.920354  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.920358  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.920361  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.920367  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.920371  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.920379  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.920383  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.920453  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.949467  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.949490  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.949495  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.949499  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.949504  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.949508  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.949514  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.949525  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.949534  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.949540  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.949546  437269 command_runner.go:130] >      static
	I1014 19:40:32.949555  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.949560  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.949567  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.949571  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.949576  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.949582  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.949588  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.949592  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.949599  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.952722  437269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:40:32.953989  437269 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:40:32.971672  437269 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:40:32.976098  437269 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1014 19:40:32.976178  437269 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:40:32.976267  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:32.976332  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.006155  437269 command_runner.go:130] > {
	I1014 19:40:33.006181  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.006186  437269 command_runner.go:130] >     {
	I1014 19:40:33.006194  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.006200  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006209  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.006213  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006218  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006232  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.006248  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.006257  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006270  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.006276  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006281  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006287  437269 command_runner.go:130] >     },
	I1014 19:40:33.006290  437269 command_runner.go:130] >     {
	I1014 19:40:33.006304  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.006316  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006324  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.006330  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006335  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006348  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.006364  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.006372  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006379  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.006388  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006398  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006402  437269 command_runner.go:130] >     },
	I1014 19:40:33.006405  437269 command_runner.go:130] >     {
	I1014 19:40:33.006413  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.006422  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006431  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.006441  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006448  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006463  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.006477  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.006486  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006496  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.006505  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.006513  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006516  437269 command_runner.go:130] >     },
	I1014 19:40:33.006525  437269 command_runner.go:130] >     {
	I1014 19:40:33.006535  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.006545  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006555  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.006563  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006570  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006584  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.006598  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.006607  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006615  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.006619  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006624  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006632  437269 command_runner.go:130] >       },
	I1014 19:40:33.006646  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006657  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006667  437269 command_runner.go:130] >     },
	I1014 19:40:33.006675  437269 command_runner.go:130] >     {
	I1014 19:40:33.006689  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.006695  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006707  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.006714  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006718  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006732  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.006748  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.006767  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006778  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.006786  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006795  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006803  437269 command_runner.go:130] >       },
	I1014 19:40:33.006809  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006819  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006827  437269 command_runner.go:130] >     },
	I1014 19:40:33.006835  437269 command_runner.go:130] >     {
	I1014 19:40:33.006846  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.006855  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006865  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.006874  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006884  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006899  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.006910  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.006918  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006926  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.006935  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006948  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006957  437269 command_runner.go:130] >       },
	I1014 19:40:33.006967  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006976  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006985  437269 command_runner.go:130] >     },
	I1014 19:40:33.006993  437269 command_runner.go:130] >     {
	I1014 19:40:33.007004  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.007011  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007019  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.007027  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007037  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007052  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.007067  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.007076  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007084  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.007092  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007095  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007103  437269 command_runner.go:130] >     },
	I1014 19:40:33.007109  437269 command_runner.go:130] >     {
	I1014 19:40:33.007123  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.007132  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007142  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.007152  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007162  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007175  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.007194  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.007203  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007213  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.007220  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007229  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.007237  437269 command_runner.go:130] >       },
	I1014 19:40:33.007246  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007253  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007260  437269 command_runner.go:130] >     },
	I1014 19:40:33.007266  437269 command_runner.go:130] >     {
	I1014 19:40:33.007278  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.007285  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007290  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.007298  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007308  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007320  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.007334  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.007342  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007351  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.007359  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007370  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.007376  437269 command_runner.go:130] >       },
	I1014 19:40:33.007380  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007387  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.007393  437269 command_runner.go:130] >     }
	I1014 19:40:33.007401  437269 command_runner.go:130] >   ]
	I1014 19:40:33.007406  437269 command_runner.go:130] > }
	I1014 19:40:33.007590  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.007603  437269 crio.go:433] Images already preloaded, skipping extraction
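The preload check above only needs the tag list out of that `crictl images --output json` dump. A minimal sketch of inspecting the same dump by hand, assuming jq is present on the node:

    sudo crictl images --output json \
      | jq -r '.images[] | "\(.repoTags[0] // "<untagged>")\t\(.size)"'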
	I1014 19:40:33.007661  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.032442  437269 command_runner.go:130] > {
	I1014 19:40:33.032462  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.032466  437269 command_runner.go:130] >     {
	I1014 19:40:33.032478  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.032485  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032495  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.032501  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032508  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032519  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.032527  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.032534  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032538  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.032542  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032548  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032551  437269 command_runner.go:130] >     },
	I1014 19:40:33.032555  437269 command_runner.go:130] >     {
	I1014 19:40:33.032561  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.032567  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032572  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.032575  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032582  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032591  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.032602  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.032608  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032612  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.032616  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032621  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032626  437269 command_runner.go:130] >     },
	I1014 19:40:33.032629  437269 command_runner.go:130] >     {
	I1014 19:40:33.032635  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.032642  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032647  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.032652  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032656  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032665  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.032675  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.032682  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032686  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.032690  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.032694  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032697  437269 command_runner.go:130] >     },
	I1014 19:40:33.032700  437269 command_runner.go:130] >     {
	I1014 19:40:33.032705  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.032709  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032714  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.032720  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032724  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032730  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.032739  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.032743  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032749  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.032772  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032781  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032786  437269 command_runner.go:130] >       },
	I1014 19:40:33.032793  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032798  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032801  437269 command_runner.go:130] >     },
	I1014 19:40:33.032804  437269 command_runner.go:130] >     {
	I1014 19:40:33.032810  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.032816  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032821  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.032827  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032830  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032837  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.032847  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.032850  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032858  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.032862  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032866  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032869  437269 command_runner.go:130] >       },
	I1014 19:40:33.032873  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032877  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032880  437269 command_runner.go:130] >     },
	I1014 19:40:33.032883  437269 command_runner.go:130] >     {
	I1014 19:40:33.032889  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.032895  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032901  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.032906  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032910  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032917  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.032935  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.032940  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032944  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.032948  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032955  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032958  437269 command_runner.go:130] >       },
	I1014 19:40:33.032963  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032969  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032973  437269 command_runner.go:130] >     },
	I1014 19:40:33.032976  437269 command_runner.go:130] >     {
	I1014 19:40:33.032981  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.032986  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032990  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.032996  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033000  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033009  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.033018  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.033023  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033027  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.033033  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033037  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033042  437269 command_runner.go:130] >     },
	I1014 19:40:33.033045  437269 command_runner.go:130] >     {
	I1014 19:40:33.033051  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.033055  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033059  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.033062  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033066  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033073  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.033115  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.033125  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033129  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.033133  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033139  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.033142  437269 command_runner.go:130] >       },
	I1014 19:40:33.033146  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033150  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033153  437269 command_runner.go:130] >     },
	I1014 19:40:33.033157  437269 command_runner.go:130] >     {
	I1014 19:40:33.033166  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.033170  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033175  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.033180  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033184  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033194  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.033201  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.033207  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033210  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.033214  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033217  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.033221  437269 command_runner.go:130] >       },
	I1014 19:40:33.033227  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033231  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.033234  437269 command_runner.go:130] >     }
	I1014 19:40:33.033237  437269 command_runner.go:130] >   ]
	I1014 19:40:33.033243  437269 command_runner.go:130] > }
	I1014 19:40:33.033339  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.033350  437269 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:40:33.033357  437269 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:40:33.033466  437269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
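The empty `ExecStart=` followed by a full `ExecStart=...` is the standard systemd idiom for replacing (rather than appending to) a unit's start command from a drop-in. A minimal sketch of the same pattern, with a hypothetical drop-in path and a trimmed flag set:

    # hypothetical drop-in path, for illustration only
    cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet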
	I1014 19:40:33.033525  437269 ssh_runner.go:195] Run: crio config
	I1014 19:40:33.060289  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059904069Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1014 19:40:33.060322  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059934761Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1014 19:40:33.060333  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.05995717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1014 19:40:33.060344  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059977069Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1014 19:40:33.060356  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060036887Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:33.060415  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060204237Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1014 19:40:33.072518  437269 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
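The "Updating config" lines above show CRI-O's merge order: the base /etc/crio/crio.conf first (skipped here because it does not exist), then every drop-in under /etc/crio/crio.conf.d in lexical order, so later files win. A minimal sketch of overriding one option the same way (the file name 99-example.conf is an assumption):

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-example.conf
    [crio.runtime]
    log_level = "debug"
    EOF
    sudo crio config | grep log_level   # prints the merged value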
	I1014 19:40:33.078451  437269 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 19:40:33.078471  437269 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 19:40:33.078478  437269 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 19:40:33.078485  437269 command_runner.go:130] > #
	I1014 19:40:33.078491  437269 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 19:40:33.078497  437269 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 19:40:33.078504  437269 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 19:40:33.078513  437269 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 19:40:33.078518  437269 command_runner.go:130] > # reload'.
	I1014 19:40:33.078524  437269 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 19:40:33.078533  437269 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 19:40:33.078539  437269 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 19:40:33.078545  437269 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 19:40:33.078551  437269 command_runner.go:130] > [crio]
	I1014 19:40:33.078557  437269 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 19:40:33.078564  437269 command_runner.go:130] > # containers images, in this directory.
	I1014 19:40:33.078572  437269 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1014 19:40:33.078580  437269 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 19:40:33.078585  437269 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1014 19:40:33.078594  437269 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1014 19:40:33.078601  437269 command_runner.go:130] > # imagestore = ""
	I1014 19:40:33.078607  437269 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 19:40:33.078615  437269 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 19:40:33.078620  437269 command_runner.go:130] > # storage_driver = "overlay"
	I1014 19:40:33.078625  437269 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 19:40:33.078633  437269 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 19:40:33.078637  437269 command_runner.go:130] > # storage_option = [
	I1014 19:40:33.078642  437269 command_runner.go:130] > # ]
	I1014 19:40:33.078648  437269 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 19:40:33.078656  437269 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 19:40:33.078660  437269 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 19:40:33.078667  437269 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 19:40:33.078673  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 19:40:33.078690  437269 command_runner.go:130] > # always happen on a node reboot
	I1014 19:40:33.078695  437269 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 19:40:33.078703  437269 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 19:40:33.078709  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 19:40:33.078716  437269 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 19:40:33.078720  437269 command_runner.go:130] > # version_file_persist = ""
	I1014 19:40:33.078729  437269 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 19:40:33.078739  437269 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 19:40:33.078745  437269 command_runner.go:130] > # internal_wipe = true
	I1014 19:40:33.078771  437269 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 19:40:33.078784  437269 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 19:40:33.078790  437269 command_runner.go:130] > # internal_repair = true
	I1014 19:40:33.078798  437269 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 19:40:33.078804  437269 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 19:40:33.078816  437269 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 19:40:33.078823  437269 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 19:40:33.078829  437269 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 19:40:33.078834  437269 command_runner.go:130] > [crio.api]
	I1014 19:40:33.078839  437269 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 19:40:33.078846  437269 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 19:40:33.078851  437269 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 19:40:33.078858  437269 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 19:40:33.078864  437269 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 19:40:33.078871  437269 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 19:40:33.078875  437269 command_runner.go:130] > # stream_port = "0"
	I1014 19:40:33.078881  437269 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 19:40:33.078885  437269 command_runner.go:130] > # stream_enable_tls = false
	I1014 19:40:33.078893  437269 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 19:40:33.078897  437269 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 19:40:33.078904  437269 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 19:40:33.078912  437269 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078916  437269 command_runner.go:130] > # stream_tls_cert = ""
	I1014 19:40:33.078924  437269 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 19:40:33.078931  437269 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078936  437269 command_runner.go:130] > # stream_tls_key = ""
	I1014 19:40:33.078941  437269 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 19:40:33.078949  437269 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 19:40:33.078954  437269 command_runner.go:130] > # automatically pick up the changes.
	I1014 19:40:33.078960  437269 command_runner.go:130] > # stream_tls_ca = ""
	I1014 19:40:33.078977  437269 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078984  437269 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1014 19:40:33.078991  437269 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078998  437269 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1014 19:40:33.079004  437269 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 19:40:33.079011  437269 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 19:40:33.079015  437269 command_runner.go:130] > [crio.runtime]
	I1014 19:40:33.079021  437269 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 19:40:33.079028  437269 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 19:40:33.079032  437269 command_runner.go:130] > # "nofile=1024:2048"
	I1014 19:40:33.079040  437269 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 19:40:33.079046  437269 command_runner.go:130] > # default_ulimits = [
	I1014 19:40:33.079049  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079054  437269 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 19:40:33.079060  437269 command_runner.go:130] > # no_pivot = false
	I1014 19:40:33.079065  437269 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 19:40:33.079073  437269 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 19:40:33.079078  437269 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 19:40:33.079086  437269 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 19:40:33.079090  437269 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 19:40:33.079099  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079105  437269 command_runner.go:130] > # conmon = ""
	I1014 19:40:33.079109  437269 command_runner.go:130] > # Cgroup setting for conmon
	I1014 19:40:33.079117  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 19:40:33.079123  437269 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 19:40:33.079129  437269 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 19:40:33.079136  437269 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 19:40:33.079142  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079147  437269 command_runner.go:130] > # conmon_env = [
	I1014 19:40:33.079150  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079155  437269 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 19:40:33.079163  437269 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 19:40:33.079169  437269 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 19:40:33.079175  437269 command_runner.go:130] > # default_env = [
	I1014 19:40:33.079177  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079183  437269 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 19:40:33.079192  437269 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1014 19:40:33.079198  437269 command_runner.go:130] > # selinux = false
	I1014 19:40:33.079204  437269 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 19:40:33.079210  437269 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1014 19:40:33.079219  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079225  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.079231  437269 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1014 19:40:33.079237  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079242  437269 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1014 19:40:33.079250  437269 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 19:40:33.079258  437269 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 19:40:33.079264  437269 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 19:40:33.079273  437269 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 19:40:33.079279  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079284  437269 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 19:40:33.079291  437269 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 19:40:33.079295  437269 command_runner.go:130] > # the cgroup blockio controller.
	I1014 19:40:33.079301  437269 command_runner.go:130] > # blockio_config_file = ""
	I1014 19:40:33.079308  437269 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 19:40:33.079314  437269 command_runner.go:130] > # blockio parameters.
	I1014 19:40:33.079317  437269 command_runner.go:130] > # blockio_reload = false
	I1014 19:40:33.079325  437269 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 19:40:33.079329  437269 command_runner.go:130] > # irqbalance daemon.
	I1014 19:40:33.079336  437269 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 19:40:33.079342  437269 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 19:40:33.079351  437269 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 19:40:33.079360  437269 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 19:40:33.079367  437269 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 19:40:33.079374  437269 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 19:40:33.079380  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079385  437269 command_runner.go:130] > # rdt_config_file = ""
	I1014 19:40:33.079393  437269 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 19:40:33.079396  437269 command_runner.go:130] > # cgroup_manager = "systemd"
	I1014 19:40:33.079402  437269 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 19:40:33.079407  437269 command_runner.go:130] > # separate_pull_cgroup = ""
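cgroup_manager = "systemd" only works cleanly if the kubelet agrees; a cgroupfs/systemd mismatch between kubelet and CRI-O is a classic cause of pods stuck in ContainerCreating. A quick consistency check, assuming the default kubeadm location for the kubelet config:

    grep -i cgroupDriver /var/lib/kubelet/config.yaml
    # expected: cgroupDriver: systemd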
	I1014 19:40:33.079413  437269 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 19:40:33.079421  437269 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 19:40:33.079427  437269 command_runner.go:130] > # will be added.
	I1014 19:40:33.079430  437269 command_runner.go:130] > # default_capabilities = [
	I1014 19:40:33.079433  437269 command_runner.go:130] > # 	"CHOWN",
	I1014 19:40:33.079439  437269 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 19:40:33.079442  437269 command_runner.go:130] > # 	"FSETID",
	I1014 19:40:33.079445  437269 command_runner.go:130] > # 	"FOWNER",
	I1014 19:40:33.079451  437269 command_runner.go:130] > # 	"SETGID",
	I1014 19:40:33.079466  437269 command_runner.go:130] > # 	"SETUID",
	I1014 19:40:33.079472  437269 command_runner.go:130] > # 	"SETPCAP",
	I1014 19:40:33.079475  437269 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 19:40:33.079480  437269 command_runner.go:130] > # 	"KILL",
	I1014 19:40:33.079484  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079493  437269 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 19:40:33.079501  437269 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 19:40:33.079508  437269 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 19:40:33.079514  437269 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 19:40:33.079522  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079526  437269 command_runner.go:130] > default_sysctls = [
	I1014 19:40:33.079530  437269 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 19:40:33.079536  437269 command_runner.go:130] > ]
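That single uncommented default, net.ipv4.ip_unprivileged_port_start=0, is what lets non-root containers bind ports below 1024. One way to confirm it from inside any running container; <container-id> is a placeholder:

    sudo crictl ps -q | head -n1                   # pick any running container
    sudo crictl exec <container-id> cat /proc/sys/net/ipv4/ip_unprivileged_port_start
    # expected: 0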
	I1014 19:40:33.079540  437269 command_runner.go:130] > # List of devices on the host that a
	I1014 19:40:33.079548  437269 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 19:40:33.079553  437269 command_runner.go:130] > # allowed_devices = [
	I1014 19:40:33.079557  437269 command_runner.go:130] > # 	"/dev/fuse",
	I1014 19:40:33.079563  437269 command_runner.go:130] > # 	"/dev/net/tun",
	I1014 19:40:33.079566  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079574  437269 command_runner.go:130] > # List of additional devices, specified as
	I1014 19:40:33.079581  437269 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 19:40:33.079588  437269 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 19:40:33.079595  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079601  437269 command_runner.go:130] > # additional_devices = [
	I1014 19:40:33.079604  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079611  437269 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 19:40:33.079615  437269 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 19:40:33.079619  437269 command_runner.go:130] > # 	"/etc/cdi",
	I1014 19:40:33.079625  437269 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 19:40:33.079628  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079633  437269 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 19:40:33.079641  437269 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 19:40:33.079645  437269 command_runner.go:130] > # Defaults to false.
	I1014 19:40:33.079652  437269 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 19:40:33.079659  437269 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 19:40:33.079666  437269 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 19:40:33.079670  437269 command_runner.go:130] > # hooks_dir = [
	I1014 19:40:33.079682  437269 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 19:40:33.079687  437269 command_runner.go:130] > # ]
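hooks_dir points at directories of OCI hook definitions, one JSON file per hook, which CRI-O injects into matching containers. A minimal sketch of one such file (the file name and hook binary are hypothetical; the schema is the containers 1.0.0 hooks format):

    # /usr/local/bin/log-start.sh is a hypothetical hook binary
    cat <<'EOF' | sudo tee /usr/share/containers/oci/hooks.d/99-log-start.json
    {
      "version": "1.0.0",
      "hook": { "path": "/usr/local/bin/log-start.sh" },
      "when": { "always": true },
      "stages": [ "prestart" ]
    }
    EOF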
	I1014 19:40:33.079693  437269 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 19:40:33.079701  437269 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 19:40:33.079706  437269 command_runner.go:130] > # its default mounts from the following two files:
	I1014 19:40:33.079712  437269 command_runner.go:130] > #
	I1014 19:40:33.079718  437269 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 19:40:33.079726  437269 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 19:40:33.079734  437269 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 19:40:33.079737  437269 command_runner.go:130] > #
	I1014 19:40:33.079743  437269 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 19:40:33.079751  437269 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 19:40:33.079780  437269 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 19:40:33.079788  437269 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 19:40:33.079791  437269 command_runner.go:130] > #
	I1014 19:40:33.079797  437269 command_runner.go:130] > # default_mounts_file = ""
	I1014 19:40:33.079804  437269 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 19:40:33.079811  437269 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 19:40:33.079816  437269 command_runner.go:130] > # pids_limit = -1
	I1014 19:40:33.079822  437269 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1014 19:40:33.079830  437269 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 19:40:33.079839  437269 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 19:40:33.079846  437269 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 19:40:33.079852  437269 command_runner.go:130] > # log_size_max = -1
	I1014 19:40:33.079858  437269 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 19:40:33.079864  437269 command_runner.go:130] > # log_to_journald = false
	I1014 19:40:33.079870  437269 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 19:40:33.079878  437269 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 19:40:33.079883  437269 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 19:40:33.079890  437269 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 19:40:33.079895  437269 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 19:40:33.079901  437269 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 19:40:33.079906  437269 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 19:40:33.079912  437269 command_runner.go:130] > # read_only = false
	I1014 19:40:33.079917  437269 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 19:40:33.079926  437269 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 19:40:33.079933  437269 command_runner.go:130] > # live configuration reload.
	I1014 19:40:33.079937  437269 command_runner.go:130] > # log_level = "info"
	I1014 19:40:33.079942  437269 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 19:40:33.079950  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079953  437269 command_runner.go:130] > # log_filter = ""
	I1014 19:40:33.079959  437269 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079967  437269 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 19:40:33.079970  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.079978  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.079983  437269 command_runner.go:130] > # uid_mappings = ""
	I1014 19:40:33.079989  437269 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079997  437269 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 19:40:33.080005  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.080014  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080020  437269 command_runner.go:130] > # gid_mappings = ""
	I1014 19:40:33.080026  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 19:40:33.080035  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080043  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080049  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080055  437269 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 19:40:33.080061  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 19:40:33.080069  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080075  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080085  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080090  437269 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 19:40:33.080096  437269 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 19:40:33.080112  437269 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 19:40:33.080120  437269 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 19:40:33.080124  437269 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 19:40:33.080131  437269 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 19:40:33.080138  437269 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 19:40:33.080144  437269 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 19:40:33.080149  437269 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 19:40:33.080155  437269 command_runner.go:130] > # drop_infra_ctr = true
	I1014 19:40:33.080160  437269 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 19:40:33.080168  437269 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 19:40:33.080175  437269 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 19:40:33.080181  437269 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 19:40:33.080188  437269 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 19:40:33.080195  437269 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 19:40:33.080200  437269 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 19:40:33.080207  437269 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 19:40:33.080211  437269 command_runner.go:130] > # shared_cpuset = ""
	I1014 19:40:33.080219  437269 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 19:40:33.080223  437269 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 19:40:33.080230  437269 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 19:40:33.080237  437269 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 19:40:33.080243  437269 command_runner.go:130] > # pinns_path = ""
	I1014 19:40:33.080249  437269 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 19:40:33.080256  437269 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1014 19:40:33.080261  437269 command_runner.go:130] > # enable_criu_support = true
	I1014 19:40:33.080268  437269 command_runner.go:130] > # Enable/disable the generation of container and
	I1014 19:40:33.080273  437269 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 19:40:33.080280  437269 command_runner.go:130] > # enable_pod_events = false
	I1014 19:40:33.080285  437269 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 19:40:33.080292  437269 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 19:40:33.080296  437269 command_runner.go:130] > # default_runtime = "crun"
	I1014 19:40:33.080301  437269 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 19:40:33.080310  437269 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1014 19:40:33.080320  437269 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 19:40:33.080325  437269 command_runner.go:130] > # creation as a file is not desired either.
	I1014 19:40:33.080336  437269 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 19:40:33.080342  437269 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 19:40:33.080346  437269 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 19:40:33.080352  437269 command_runner.go:130] > # ]
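
The /etc/hostname example above can be expressed directly in configuration. A minimal sketch of a CRI-O drop-in, assuming the conventional /etc/crio/crio.conf.d/ drop-in directory and a hypothetical file name:

	# /etc/crio/crio.conf.d/10-reject-hostname.conf (hypothetical file)
	[crio.runtime]
	absent_mount_sources_to_reject = [
	    "/etc/hostname",
	]
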
	I1014 19:40:33.080357  437269 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 19:40:33.080365  437269 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 19:40:33.080373  437269 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 19:40:33.080378  437269 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 19:40:33.080382  437269 command_runner.go:130] > #
	I1014 19:40:33.080387  437269 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 19:40:33.080394  437269 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 19:40:33.080397  437269 command_runner.go:130] > # runtime_type = "oci"
	I1014 19:40:33.080404  437269 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 19:40:33.080408  437269 command_runner.go:130] > # inherit_default_runtime = false
	I1014 19:40:33.080413  437269 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 19:40:33.080419  437269 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 19:40:33.080424  437269 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 19:40:33.080430  437269 command_runner.go:130] > # monitor_env = []
	I1014 19:40:33.080435  437269 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 19:40:33.080440  437269 command_runner.go:130] > # allowed_annotations = []
	I1014 19:40:33.080445  437269 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 19:40:33.080451  437269 command_runner.go:130] > # no_sync_log = false
	I1014 19:40:33.080455  437269 command_runner.go:130] > # default_annotations = {}
	I1014 19:40:33.080461  437269 command_runner.go:130] > # stream_websockets = false
	I1014 19:40:33.080465  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.080487  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.080494  437269 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 19:40:33.080500  437269 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 19:40:33.080508  437269 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 19:40:33.080514  437269 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 19:40:33.080519  437269 command_runner.go:130] > #   in $PATH.
	I1014 19:40:33.080525  437269 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 19:40:33.080532  437269 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 19:40:33.080538  437269 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 19:40:33.080543  437269 command_runner.go:130] > #   state.
	I1014 19:40:33.080552  437269 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 19:40:33.080560  437269 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1014 19:40:33.080565  437269 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1014 19:40:33.080573  437269 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1014 19:40:33.080578  437269 command_runner.go:130] > #   the values from the default runtime on load time.
	I1014 19:40:33.080586  437269 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 19:40:33.080591  437269 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 19:40:33.080599  437269 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 19:40:33.080605  437269 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 19:40:33.080612  437269 command_runner.go:130] > #   The currently recognized values are:
	I1014 19:40:33.080618  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 19:40:33.080627  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 19:40:33.080636  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 19:40:33.080641  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 19:40:33.080651  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 19:40:33.080660  437269 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 19:40:33.080669  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 19:40:33.080680  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 19:40:33.080687  437269 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 19:40:33.080693  437269 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1014 19:40:33.080702  437269 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1014 19:40:33.080710  437269 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1014 19:40:33.080715  437269 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1014 19:40:33.080724  437269 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1014 19:40:33.080732  437269 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1014 19:40:33.080738  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1014 19:40:33.080747  437269 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 19:40:33.080751  437269 command_runner.go:130] > #   deprecated option "conmon".
	I1014 19:40:33.080773  437269 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 19:40:33.080783  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 19:40:33.080796  437269 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 19:40:33.080803  437269 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 19:40:33.080810  437269 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1014 19:40:33.080817  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 19:40:33.080824  437269 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1014 19:40:33.080830  437269 command_runner.go:130] > #   conmon-rs by using:
	I1014 19:40:33.080837  437269 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1014 19:40:33.080847  437269 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1014 19:40:33.080857  437269 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1014 19:40:33.080865  437269 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 19:40:33.080872  437269 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 19:40:33.080879  437269 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1014 19:40:33.080888  437269 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1014 19:40:33.080894  437269 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1014 19:40:33.080904  437269 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1014 19:40:33.080915  437269 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1014 19:40:33.080921  437269 command_runner.go:130] > #   when a machine crash happens.
	I1014 19:40:33.080929  437269 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1014 19:40:33.080939  437269 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1014 19:40:33.080949  437269 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1014 19:40:33.080955  437269 command_runner.go:130] > #   seccomp profile for the runtime.
	I1014 19:40:33.080961  437269 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1014 19:40:33.080970  437269 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
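
Tying several of the fields above together, a hedged sketch of one complete handler entry; the handler name and binary path are hypothetical, while the monitor_path value mirrors the crun/runc entries further down:

	[crio.runtime.runtimes.example-handler]        # hypothetical handler name
	runtime_path = "/usr/local/bin/example-runtime" # hypothetical binary
	runtime_type = "oci"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
	    "io.kubernetes.cri-o.Devices",
	    "io.kubernetes.cri-o.ShmSize",
	]
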
	I1014 19:40:33.080975  437269 command_runner.go:130] > #
	I1014 19:40:33.080980  437269 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 19:40:33.080985  437269 command_runner.go:130] > #
	I1014 19:40:33.080991  437269 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 19:40:33.080998  437269 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1014 19:40:33.081002  437269 command_runner.go:130] > #
	I1014 19:40:33.081007  437269 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 19:40:33.081015  437269 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 19:40:33.081020  437269 command_runner.go:130] > #
	I1014 19:40:33.081026  437269 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 19:40:33.081032  437269 command_runner.go:130] > # feature.
	I1014 19:40:33.081035  437269 command_runner.go:130] > #
	I1014 19:40:33.081042  437269 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1014 19:40:33.081048  437269 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 19:40:33.081057  437269 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 19:40:33.081062  437269 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 19:40:33.081070  437269 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1014 19:40:33.081073  437269 command_runner.go:130] > #
	I1014 19:40:33.081079  437269 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 19:40:33.081087  437269 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 19:40:33.081090  437269 command_runner.go:130] > #
	I1014 19:40:33.081096  437269 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1014 19:40:33.081103  437269 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 19:40:33.081106  437269 command_runner.go:130] > #
	I1014 19:40:33.081112  437269 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 19:40:33.081119  437269 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 19:40:33.081122  437269 command_runner.go:130] > # limitation.
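
Putting the notifier requirements together, a minimal pod sketch (the pod name is hypothetical and the pause image is used only as a placeholder); note restartPolicy: Never, as required above:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-debug        # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never       # otherwise the kubelet restarts the container immediately
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1   # placeholder image
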
	I1014 19:40:33.081129  437269 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1014 19:40:33.081138  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1014 19:40:33.081143  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081147  437269 command_runner.go:130] > runtime_root = "/run/crun"
	I1014 19:40:33.081151  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081157  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081161  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081167  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081171  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081177  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081181  437269 command_runner.go:130] > allowed_annotations = [
	I1014 19:40:33.081187  437269 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1014 19:40:33.081190  437269 command_runner.go:130] > ]
	I1014 19:40:33.081197  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081201  437269 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 19:40:33.081208  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1014 19:40:33.081212  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081218  437269 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 19:40:33.081222  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081229  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081234  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081241  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081245  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081251  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081256  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081264  437269 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 19:40:33.081271  437269 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 19:40:33.081277  437269 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 19:40:33.081286  437269 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1014 19:40:33.081298  437269 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1014 19:40:33.081309  437269 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1014 19:40:33.081318  437269 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1014 19:40:33.081324  437269 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 19:40:33.081335  437269 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 19:40:33.081345  437269 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 19:40:33.081353  437269 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 19:40:33.081359  437269 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 19:40:33.081365  437269 command_runner.go:130] > # Example:
	I1014 19:40:33.081369  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 19:40:33.081375  437269 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 19:40:33.081380  437269 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 19:40:33.081389  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 19:40:33.081395  437269 command_runner.go:130] > # cpuset = "0-1"
	I1014 19:40:33.081399  437269 command_runner.go:130] > # cpushares = "5"
	I1014 19:40:33.081405  437269 command_runner.go:130] > # cpuquota = "1000"
	I1014 19:40:33.081408  437269 command_runner.go:130] > # cpuperiod = "100000"
	I1014 19:40:33.081412  437269 command_runner.go:130] > # cpulimit = "35"
	I1014 19:40:33.081417  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.081421  437269 command_runner.go:130] > # The workload name is workload-type.
	I1014 19:40:33.081430  437269 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 19:40:33.081438  437269 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 19:40:33.081443  437269 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 19:40:33.081453  437269 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 19:40:33.081470  437269 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
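
On the pod side, the opt-in and a per-container override for the workload example above might look as follows (pod and container names are hypothetical, and the override follows the $annotation_prefix.$resource/$ctrName form described earlier):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                           # hypothetical name
	  annotations:
	    io.crio/workload: ""                        # activation: key only, value ignored
	    io.crio.workload-type.cpushares/app: "10"   # per-container override
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1         # placeholder image
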
	I1014 19:40:33.081477  437269 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 19:40:33.081484  437269 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 19:40:33.081490  437269 command_runner.go:130] > # Default value is set to true
	I1014 19:40:33.081494  437269 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 19:40:33.081499  437269 command_runner.go:130] > # disable_hostport_mapping determines whether the container
	I1014 19:40:33.081505  437269 command_runner.go:130] > # hostport mapping in CRI-O is disabled.
	I1014 19:40:33.081510  437269 command_runner.go:130] > # Default value is set to 'false'
	I1014 19:40:33.081516  437269 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 19:40:33.081522  437269 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1014 19:40:33.081531  437269 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1014 19:40:33.081537  437269 command_runner.go:130] > # timezone = ""
	I1014 19:40:33.081543  437269 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 19:40:33.081549  437269 command_runner.go:130] > #
	I1014 19:40:33.081555  437269 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 19:40:33.081563  437269 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1014 19:40:33.081567  437269 command_runner.go:130] > [crio.image]
	I1014 19:40:33.081575  437269 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 19:40:33.081579  437269 command_runner.go:130] > # default_transport = "docker://"
	I1014 19:40:33.081585  437269 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 19:40:33.081593  437269 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081597  437269 command_runner.go:130] > # global_auth_file = ""
	I1014 19:40:33.081604  437269 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 19:40:33.081609  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081616  437269 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.081622  437269 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 19:40:33.081630  437269 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081634  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081639  437269 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 19:40:33.081645  437269 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 19:40:33.081653  437269 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1014 19:40:33.081658  437269 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1014 19:40:33.081666  437269 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 19:40:33.081671  437269 command_runner.go:130] > # pause_command = "/pause"
	I1014 19:40:33.081682  437269 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 19:40:33.081690  437269 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 19:40:33.081695  437269 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 19:40:33.081703  437269 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 19:40:33.081709  437269 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 19:40:33.081717  437269 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 19:40:33.081723  437269 command_runner.go:130] > # pinned_images = [
	I1014 19:40:33.081725  437269 command_runner.go:130] > # ]
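
For illustration, one entry of each pattern type described above (the non-pause references are hypothetical):

	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	    "quay.io/myorg/*",               # glob: wildcard only at the end
	    "*critical*",                    # keyword: wildcards on both ends
	]
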
	I1014 19:40:33.081731  437269 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 19:40:33.081739  437269 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 19:40:33.081745  437269 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 19:40:33.081762  437269 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 19:40:33.081774  437269 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 19:40:33.081781  437269 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1014 19:40:33.081789  437269 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 19:40:33.081795  437269 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 19:40:33.081804  437269 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 19:40:33.081813  437269 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1014 19:40:33.081822  437269 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 19:40:33.081833  437269 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1014 19:40:33.081841  437269 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 19:40:33.081847  437269 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 19:40:33.081853  437269 command_runner.go:130] > # changing them here.
	I1014 19:40:33.081859  437269 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1014 19:40:33.081865  437269 command_runner.go:130] > # insecure_registries = [
	I1014 19:40:33.081868  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081877  437269 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 19:40:33.081887  437269 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1014 19:40:33.081893  437269 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 19:40:33.081898  437269 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 19:40:33.081904  437269 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 19:40:33.081910  437269 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1014 19:40:33.081918  437269 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1014 19:40:33.081925  437269 command_runner.go:130] > # auto_reload_registries = false
	I1014 19:40:33.081932  437269 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1014 19:40:33.081940  437269 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1014 19:40:33.081947  437269 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1014 19:40:33.081951  437269 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1014 19:40:33.081958  437269 command_runner.go:130] > # The mode of short name resolution.
	I1014 19:40:33.081966  437269 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1014 19:40:33.081977  437269 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1014 19:40:33.081984  437269 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1014 19:40:33.081989  437269 command_runner.go:130] > # short_name_mode = "enforcing"
	I1014 19:40:33.081997  437269 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1014 19:40:33.082002  437269 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1014 19:40:33.082009  437269 command_runner.go:130] > # oci_artifact_mount_support = true
	I1014 19:40:33.082015  437269 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 19:40:33.082021  437269 command_runner.go:130] > # CNI plugins.
	I1014 19:40:33.082025  437269 command_runner.go:130] > [crio.network]
	I1014 19:40:33.082033  437269 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 19:40:33.082040  437269 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1014 19:40:33.082044  437269 command_runner.go:130] > # cni_default_network = ""
	I1014 19:40:33.082052  437269 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 19:40:33.082056  437269 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 19:40:33.082064  437269 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 19:40:33.082068  437269 command_runner.go:130] > # plugin_dirs = [
	I1014 19:40:33.082071  437269 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 19:40:33.082074  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082078  437269 command_runner.go:130] > # List of included pod metrics.
	I1014 19:40:33.082082  437269 command_runner.go:130] > # included_pod_metrics = [
	I1014 19:40:33.082085  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082089  437269 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1014 19:40:33.082092  437269 command_runner.go:130] > [crio.metrics]
	I1014 19:40:33.082097  437269 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 19:40:33.082100  437269 command_runner.go:130] > # enable_metrics = false
	I1014 19:40:33.082104  437269 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 19:40:33.082108  437269 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 19:40:33.082114  437269 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1014 19:40:33.082119  437269 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 19:40:33.082124  437269 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 19:40:33.082128  437269 command_runner.go:130] > # metrics_collectors = [
	I1014 19:40:33.082131  437269 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 19:40:33.082135  437269 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 19:40:33.082139  437269 command_runner.go:130] > # 	"containers_oom_total",
	I1014 19:40:33.082142  437269 command_runner.go:130] > # 	"processes_defunct",
	I1014 19:40:33.082146  437269 command_runner.go:130] > # 	"operations_total",
	I1014 19:40:33.082150  437269 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 19:40:33.082154  437269 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 19:40:33.082157  437269 command_runner.go:130] > # 	"operations_errors_total",
	I1014 19:40:33.082162  437269 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 19:40:33.082169  437269 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 19:40:33.082173  437269 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 19:40:33.082178  437269 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 19:40:33.082182  437269 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 19:40:33.082188  437269 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 19:40:33.082193  437269 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 19:40:33.082199  437269 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 19:40:33.082203  437269 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1014 19:40:33.082208  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082214  437269 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1014 19:40:33.082219  437269 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1014 19:40:33.082224  437269 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 19:40:33.082227  437269 command_runner.go:130] > # metrics_port = 9090
	I1014 19:40:33.082234  437269 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 19:40:33.082238  437269 command_runner.go:130] > # metrics_socket = ""
	I1014 19:40:33.082245  437269 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 19:40:33.082250  437269 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 19:40:33.082258  437269 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 19:40:33.082263  437269 command_runner.go:130] > # certificate on any modification event.
	I1014 19:40:33.082269  437269 command_runner.go:130] > # metrics_cert = ""
	I1014 19:40:33.082274  437269 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 19:40:33.082280  437269 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 19:40:33.082284  437269 command_runner.go:130] > # metrics_key = ""
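
As a hedged sketch, enabling the metrics endpoint with the defaults shown above amounts to uncommenting three keys:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
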
	I1014 19:40:33.082292  437269 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 19:40:33.082295  437269 command_runner.go:130] > [crio.tracing]
	I1014 19:40:33.082300  437269 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 19:40:33.082306  437269 command_runner.go:130] > # enable_tracing = false
	I1014 19:40:33.082311  437269 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1014 19:40:33.082317  437269 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1014 19:40:33.082324  437269 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 19:40:33.082330  437269 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1014 19:40:33.082334  437269 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 19:40:33.082340  437269 command_runner.go:130] > [crio.nri]
	I1014 19:40:33.082345  437269 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 19:40:33.082350  437269 command_runner.go:130] > # enable_nri = true
	I1014 19:40:33.082354  437269 command_runner.go:130] > # NRI socket to listen on.
	I1014 19:40:33.082361  437269 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 19:40:33.082365  437269 command_runner.go:130] > # NRI plugin directory to use.
	I1014 19:40:33.082372  437269 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 19:40:33.082376  437269 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 19:40:33.082383  437269 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 19:40:33.082388  437269 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 19:40:33.082423  437269 command_runner.go:130] > # nri_disable_connections = false
	I1014 19:40:33.082431  437269 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 19:40:33.082435  437269 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 19:40:33.082440  437269 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 19:40:33.082444  437269 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 19:40:33.082451  437269 command_runner.go:130] > # NRI default validator configuration.
	I1014 19:40:33.082457  437269 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1014 19:40:33.082466  437269 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1014 19:40:33.082472  437269 command_runner.go:130] > # can be restricted/rejected:
	I1014 19:40:33.082476  437269 command_runner.go:130] > # - OCI hook injection
	I1014 19:40:33.082483  437269 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1014 19:40:33.082487  437269 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1014 19:40:33.082494  437269 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1014 19:40:33.082498  437269 command_runner.go:130] > # - adjustment of linux namespaces
	I1014 19:40:33.082506  437269 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1014 19:40:33.082514  437269 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1014 19:40:33.082519  437269 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1014 19:40:33.082524  437269 command_runner.go:130] > #
	I1014 19:40:33.082528  437269 command_runner.go:130] > # [crio.nri.default_validator]
	I1014 19:40:33.082535  437269 command_runner.go:130] > # nri_enable_default_validator = false
	I1014 19:40:33.082539  437269 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1014 19:40:33.082546  437269 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1014 19:40:33.082551  437269 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1014 19:40:33.082559  437269 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1014 19:40:33.082564  437269 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1014 19:40:33.082570  437269 command_runner.go:130] > # nri_validator_required_plugins = [
	I1014 19:40:33.082573  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082582  437269 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1014 19:40:33.082587  437269 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 19:40:33.082593  437269 command_runner.go:130] > [crio.stats]
	I1014 19:40:33.082598  437269 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 19:40:33.082608  437269 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 19:40:33.082614  437269 command_runner.go:130] > # stats_collection_period = 0
	I1014 19:40:33.082619  437269 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1014 19:40:33.082628  437269 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1014 19:40:33.082631  437269 command_runner.go:130] > # collection_period = 0
	I1014 19:40:33.082741  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:33.082769  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:33.082789  437269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:40:33.082811  437269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:40:33.082940  437269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
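The rendered config above is what the scp step below ships to /var/tmp/minikube/kubeadm.yaml.new; kubeadm then consumes it via its --config flag, with an invocation roughly of the form (additional flags vary by minikube version and are omitted here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
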
	I1014 19:40:33.083002  437269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:40:33.091321  437269 command_runner.go:130] > kubeadm
	I1014 19:40:33.091339  437269 command_runner.go:130] > kubectl
	I1014 19:40:33.091351  437269 command_runner.go:130] > kubelet
	I1014 19:40:33.091376  437269 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:40:33.091429  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:40:33.099086  437269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:40:33.111962  437269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:40:33.125422  437269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1014 19:40:33.138383  437269 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:40:33.142436  437269 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1014 19:40:33.142515  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.229714  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:33.242948  437269 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:40:33.242967  437269 certs.go:195] generating shared ca certs ...
	I1014 19:40:33.242983  437269 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.243111  437269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:40:33.243147  437269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:40:33.243157  437269 certs.go:257] generating profile certs ...
	I1014 19:40:33.243244  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:40:33.243295  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:40:33.243331  437269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:40:33.243342  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 19:40:33.243354  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 19:40:33.243366  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 19:40:33.243378  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 19:40:33.243389  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 19:40:33.243402  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 19:40:33.243414  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 19:40:33.243426  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 19:40:33.243468  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:40:33.243499  437269 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:40:33.243509  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:40:33.243528  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:40:33.243550  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:40:33.243570  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:40:33.243605  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:33.243631  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.243646  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.243657  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.244241  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:40:33.262628  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:40:33.280949  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:40:33.299645  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:40:33.318581  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:40:33.336772  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:40:33.354893  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:40:33.372224  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:40:33.389816  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:40:33.407785  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:40:33.425006  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:40:33.442414  437269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:40:33.455418  437269 ssh_runner.go:195] Run: openssl version
	I1014 19:40:33.461786  437269 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1014 19:40:33.461878  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:40:33.470707  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474930  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474991  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.475040  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.510084  437269 command_runner.go:130] > 51391683
	I1014 19:40:33.510386  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:40:33.519147  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:40:33.528110  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532126  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532195  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532237  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.566452  437269 command_runner.go:130] > 3ec20f2e
	I1014 19:40:33.566529  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 19:40:33.575059  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:40:33.583998  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.587961  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588033  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588081  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.622398  437269 command_runner.go:130] > b5213941
	I1014 19:40:33.622796  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
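
The hash-and-link sequence above follows OpenSSL's c_rehash convention: openssl x509 -hash -noout prints the certificate's subject hash, and the cert is made discoverable by symlinking it as <hash>.<n>, where n disambiguates hash collisions (hence the .0 suffix). Reproducing the last link by hand, with paths taken from the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
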
	I1014 19:40:33.631371  437269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635295  437269 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635320  437269 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 19:40:33.635326  437269 command_runner.go:130] > Device: 8,1	Inode: 573968      Links: 1
	I1014 19:40:33.635332  437269 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:33.635341  437269 command_runner.go:130] > Access: 2025-10-14 19:36:24.950222095 +0000
	I1014 19:40:33.635346  437269 command_runner.go:130] > Modify: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635350  437269 command_runner.go:130] > Change: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635355  437269 command_runner.go:130] >  Birth: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635409  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:40:33.669731  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.670080  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:40:33.705048  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.705140  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:40:33.739547  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.739632  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:40:33.774590  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.774998  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:40:33.810800  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.810892  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 19:40:33.846191  437269 command_runner.go:130] > Certificate will not expire
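
Each "-checkend 86400" run asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" is the success case that lets the restart reuse the existing certs instead of regenerating them. The same check expressed with Go's standard library (the path below is one of the certs from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // the given window -- the same question `openssl x509 -checkend` answers
    // via its exit status.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
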
	I1014 19:40:33.846525  437269 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:33.846626  437269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:40:33.846701  437269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:40:33.876026  437269 cri.go:89] found id: ""
	I1014 19:40:33.876095  437269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:40:33.883772  437269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 19:40:33.883800  437269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 19:40:33.883806  437269 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 19:40:33.884383  437269 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:40:33.884404  437269 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:40:33.884457  437269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:40:33.892144  437269 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:40:33.892232  437269 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.892262  437269 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "functional-744288" cluster setting kubeconfig missing "functional-744288" context setting]
	I1014 19:40:33.892554  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.893171  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.893322  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.893776  437269 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 19:40:33.893798  437269 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 19:40:33.893803  437269 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 19:40:33.893807  437269 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 19:40:33.893810  437269 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 19:40:33.893821  437269 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 19:40:33.894261  437269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:40:33.902475  437269 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 19:40:33.902513  437269 kubeadm.go:601] duration metric: took 18.102158ms to restartPrimaryControlPlane
	I1014 19:40:33.902527  437269 kubeadm.go:402] duration metric: took 56.015342ms to StartCluster
	I1014 19:40:33.902549  437269 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.902670  437269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.903326  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.903559  437269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:40:33.903636  437269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 19:40:33.903763  437269 addons.go:69] Setting storage-provisioner=true in profile "functional-744288"
	I1014 19:40:33.903782  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:33.903793  437269 addons.go:69] Setting default-storageclass=true in profile "functional-744288"
	I1014 19:40:33.903791  437269 addons.go:238] Setting addon storage-provisioner=true in "functional-744288"
	I1014 19:40:33.903828  437269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-744288"
	I1014 19:40:33.903863  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.904105  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.904258  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.906507  437269 out.go:179] * Verifying Kubernetes components...
	I1014 19:40:33.907562  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.925699  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.925934  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.926358  437269 addons.go:238] Setting addon default-storageclass=true in "functional-744288"
	I1014 19:40:33.926409  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.926937  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.928366  437269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:40:33.930195  437269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:33.930216  437269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:40:33.930272  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.952215  437269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:33.952244  437269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:40:33.952310  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.956857  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:33.971706  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
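
The cli_runner calls above use a Go template against docker inspect to find which host port is mapped to the container's SSH port 22/tcp (32898 here), which sshutil then dials on 127.0.0.1. A standalone sketch of the same lookup (the container name is taken from the log; the helper is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks the docker CLI which host port is bound to the
    // container's 22/tcp, using the same inspect template as the log.
    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("functional-744288")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh on 127.0.0.1:" + port) // e.g. 32898 in the log above
    }
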
	I1014 19:40:34.006948  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:34.021044  437269 node_ready.go:35] waiting up to 6m0s for node "functional-744288" to be "Ready" ...
	I1014 19:40:34.021181  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.021246  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.021571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
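
node_ready.go gives the node up to 6m0s to report Ready, issuing GET /api/v1/nodes/functional-744288 roughly every 500ms; the empty responses and connection-refused warnings that follow are expected while the apiserver is still coming back up. A compressed sketch of such a wait loop using client-go (the kubeconfig path is illustrative, and this is the pattern, not minikube's exact code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // tolerating transient errors (e.g. connection refused while the
    // apiserver restarts), as node_ready.go does in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "functional-744288", 6*time.Minute))
    }
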
	I1014 19:40:34.069169  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.082461  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.132558  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.132646  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.132686  437269 retry.go:31] will retry after 329.296623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.141809  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.144515  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.144547  437269 retry.go:31] will retry after 261.501781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
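
Both addon applies fail the same way here: kubectl's client-side validation needs the OpenAPI schema from localhost:8441, which refuses connections until the apiserver is back, so addons.go keeps retrying with growing, jittered delays (329ms, 261ms, then up to several seconds further down). A minimal sketch of that retry shape; the delays and jitter below are illustrative, not a reproduction of minikube's retry.go policy:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry retries a command with randomized, growing delays,
    // the same shape as the "will retry after ..." lines in the log.
    func applyWithRetry(attempts int, args ...string) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
    			return nil
    		}
    		// Grow the base delay each attempt and add jitter so concurrent
    		// retries (storageclass + storage-provisioner here) spread out.
    		delay := time.Duration(i+1) * 300 * time.Millisecond
    		delay += time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	_ = applyWithRetry(5, "kubectl", "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
    }
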
	I1014 19:40:34.407171  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.461386  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.461450  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.461492  437269 retry.go:31] will retry after 293.495478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.462464  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.513733  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.516544  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.516582  437269 retry.go:31] will retry after 480.429339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.521783  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.522176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:34.755667  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.810676  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.810724  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.810744  437269 retry.go:31] will retry after 614.479011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.998090  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.022038  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.022373  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.049799  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.052676  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.052709  437269 retry.go:31] will retry after 432.01436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.426352  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:35.482403  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.482455  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.482485  437269 retry.go:31] will retry after 1.057612851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.485602  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.522076  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.522160  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.522499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.537729  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.540612  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.540651  437269 retry.go:31] will retry after 1.151923723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.021224  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.021306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.021677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:36.021751  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:36.521540  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.521648  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:36.541250  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:36.596277  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.596343  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.596366  437269 retry.go:31] will retry after 858.341252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.693590  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:36.746070  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.749114  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.749145  437269 retry.go:31] will retry after 1.225575657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.021547  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.022054  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.455821  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:37.511587  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:37.511647  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.511676  437269 retry.go:31] will retry after 1.002490371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.521830  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.521912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.522269  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.974939  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:38.021626  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:38.022184  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:38.027734  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.030470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.030507  437269 retry.go:31] will retry after 1.025461199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.515193  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:38.521814  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.521914  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.522290  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:38.567735  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.570434  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.570473  437269 retry.go:31] will retry after 1.83061983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.022158  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.022656  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:39.056879  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:39.109896  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:39.112847  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.112884  437269 retry.go:31] will retry after 3.104822489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:40.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.021785  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.022244  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:40.022320  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:40.401833  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:40.453343  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:40.456347  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.456387  437269 retry.go:31] will retry after 3.646877865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.521728  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.522111  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.021801  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.022239  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.521918  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.522016  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.522380  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:42.022132  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.022218  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.022586  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:42.022649  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:42.217895  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:42.273119  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:42.273178  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.273199  437269 retry.go:31] will retry after 5.13792128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.521564  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.522122  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.022026  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.022112  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.521291  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.521385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.521849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.021813  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.021907  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.022272  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.103502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:44.156724  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:44.159470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.159502  437269 retry.go:31] will retry after 6.372961743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.522197  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.522799  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:44.522878  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:45.021683  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.021776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.022120  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:45.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.522209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.021967  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.022441  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.522181  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:47.022210  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.022296  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.022645  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:47.022716  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:47.412207  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:47.466705  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:47.466772  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.466800  437269 retry.go:31] will retry after 6.31356698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
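	Note why every apply attempt dies before anything reaches the cluster: kubectl apply first fetches the OpenAPI schema from the apiserver (/openapi/v2) to validate the manifest, and with nothing listening on localhost:8441 that fetch itself is refused; --validate=false, as the error text suggests, would skip the schema check, but the apply would still need a live apiserver. A minimal sketch, making no assumption about minikube's internals, of gating the apply on a TCP reachability probe of the apiserver port:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverReachable does a bare TCP dial; "connect: connection refused"
	// (the error in the log) surfaces here as a dial failure.
	func apiserverReachable(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		// localhost:8441 is the address the failing kubectl validation calls target.
		for !apiserverReachable("localhost:8441", 2*time.Second) {
			fmt.Println("apiserver not reachable yet; holding off on kubectl apply")
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver reachable; kubectl apply can validate against /openapi/v2")
	}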
	[... 7 polling cycles elided (19:40:47.52 through 19:40:50.52), all connection refused; node_ready.go:55 will-retry warning at 19:40:49.52 ...]
	I1014 19:40:50.533648  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:50.590568  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:50.590621  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:50.590649  437269 retry.go:31] will retry after 8.10133009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 6 polling cycles elided (19:40:51.02 through 19:40:53.52), all connection refused; node_ready.go:55 will-retry warning at 19:40:52.02 ...]
	I1014 19:40:53.781554  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:53.838039  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:53.838101  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:53.838128  437269 retry.go:31] will retry after 9.837531091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 10 polling cycles elided (19:40:54.02 through 19:40:58.52), all connection refused; node_ready.go:55 will-retry warnings at 19:40:54.02, 19:40:56.52 and 19:40:58.52 ...]
	I1014 19:40:58.692921  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:58.746193  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:58.749262  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:58.749295  437269 retry.go:31] will retry after 17.735335575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 10 polling cycles elided (19:40:59.02 through 19:41:03.52), all connection refused; node_ready.go:55 will-retry warnings at 19:41:00.52 and 19:41:03.02 ...]
	I1014 19:41:03.675962  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:03.727887  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:03.730521  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:03.730562  437269 retry.go:31] will retry after 19.438885547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 25 polling cycles elided (19:41:04.02 through 19:41:16.02), all connection refused; node_ready.go:55 will-retry warnings at 19:41:05.52, 19:41:07.52, 19:41:10.02, 19:41:12.02 and 19:41:14.02 ...]
	I1014 19:41:16.485413  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:16.522201  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.522623  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:16.522694  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:16.537285  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:16.540211  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:16.540239  437269 retry.go:31] will retry after 23.522391633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
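	Taken together, the retry delays so far (6.31 s, 8.10 s, 9.84 s, 17.74 s, 19.44 s, 23.52 s across the two addon manifests) grow roughly geometrically, so retry.go appears to apply an increasing, jittered backoff. A minimal sketch of that shape, assuming a ~1.5x growth factor and +/-25% jitter; the actual policy inside retry.go is not visible in this log:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs op until it succeeds or attempts are exhausted,
	// sleeping an increasing, jittered delay between failures.
	func retryWithBackoff(attempts int, initial time.Duration, factor float64, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			jitter := time.Duration((rand.Float64()*0.5 - 0.25) * float64(delay)) // +/-25%
			sleep := delay + jitter
			fmt.Printf("will retry after %v: %v\n", sleep.Round(10*time.Millisecond), err)
			time.Sleep(sleep)
			delay = time.Duration(float64(delay) * factor)
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, 6*time.Second, 1.5, func() error {
			return errors.New("apply /etc/kubernetes/addons/storageclass.yaml: connection refused")
		})
	}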
	[... 13 polling cycles elided (19:41:17.02 through 19:41:23.02), all connection refused; node_ready.go:55 will-retry warnings at 19:41:19.02, 19:41:21.02 and 19:41:23.02 ...]
	I1014 19:41:23.169796  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:23.227015  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:23.227096  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.227121  437269 retry.go:31] will retry after 24.705053737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 23 polling cycles elided (19:41:23.52 through 19:41:34.52), all connection refused; node_ready.go:55 will-retry warnings at 19:41:25.52, 19:41:28.02, 19:41:30.02 and 19:41:32.52; polling continues ...]
	I1014 19:41:35.021696  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.022177  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:35.022244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:35.521929  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.522017  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.522385  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.022241  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.022330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.022808  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.521609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.522099  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:37.021877  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.021957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.022344  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:37.022414  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:37.522189  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.522617  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:38.021362  437269 type.go:168] "Request Body" body=""
	I1014 19:41:38.021440  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:38.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:38.521628  437269 type.go:168] "Request Body" body=""
	I1014 19:41:38.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:38.522097  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:39.021917  437269 type.go:168] "Request Body" body=""
	I1014 19:41:39.022012  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:39.022384  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:39.022447  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:39.522314  437269 type.go:168] "Request Body" body=""
	I1014 19:41:39.522401  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:39.522788  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:40.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:41:40.021857  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:40.022236  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:40.063502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:40.119488  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:40.119566  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:40.119604  437269 retry.go:31] will retry after 34.554126144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
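
	Here the first kubectl apply of storageclass.yaml fails because client-side validation must first download the OpenAPI schema from the (still-down) apiserver; retry.go then schedules another attempt after a long randomized delay ("will retry after 34.554126144s"). A hedged sketch of that apply-and-retry pattern, assuming an arbitrary attempt budget and a jittered, growing delay rather than minikube's exact policy:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
	// retries with a jittered, growing delay, similar in spirit to retry.go.
	func applyWithRetry(manifest string, maxAttempts int) error {
		base := 10 * time.Second
		for attempt := 1; ; attempt++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			if attempt == maxAttempts {
				return fmt.Errorf("apply %s failed after %d attempts: %v\n%s", manifest, attempt, err, out)
			}
			// Grow the delay each attempt and add jitter so retries don't synchronize.
			delay := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}

	func main() {
		// Hypothetical manifest path mirroring the log above.
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
			fmt.Println(err)
		}
	}
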
	I1014 19:41:40.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:41:40.522383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:40.522878  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:41.021513  437269 type.go:168] "Request Body" body=""
	I1014 19:41:41.021597  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:41.021974  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:41.521785  437269 type.go:168] "Request Body" body=""
	I1014 19:41:41.521864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:41.522250  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:41.522330  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:42.022203  437269 type.go:168] "Request Body" body=""
	I1014 19:41:42.022322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:42.022810  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:42.521587  437269 type.go:168] "Request Body" body=""
	I1014 19:41:42.521669  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:42.522059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:43.021981  437269 type.go:168] "Request Body" body=""
	I1014 19:41:43.022074  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:43.022442  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:43.521224  437269 type.go:168] "Request Body" body=""
	I1014 19:41:43.521304  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:43.521705  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:44.021370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.021454  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:44.021956  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:44.521703  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.521821  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.522229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.022076  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.022158  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.022500  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.521283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.521372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.521787  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:46.021585  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.021687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.022067  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:46.022144  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:46.521959  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.522047  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.522400  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.022244  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.022326  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.022720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.521502  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.521586  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.521971  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.932453  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:47.984361  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:47.987254  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:47.987292  437269 retry.go:31] will retry after 37.673790461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
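
	The storage-provisioner apply fails for the same underlying reason: kubectl's validation needs GET /openapi/v2 from localhost:8441, which is refusing connections, so the apply dies before any object reaches the cluster. One way to avoid burning a long retry delay while the socket is plainly closed is to gate each apply on a cheap TCP probe of the apiserver port; this is an illustrative pre-check, not something the minikube code above does:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverUp reports whether anything is accepting connections on the
	// apiserver address; "connection refused" as in the log maps to an error here.
	func apiserverUp(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		// Address from the log; the port is closed while the apiserver is down.
		if !apiserverUp("127.0.0.1:8441", time.Second) {
			fmt.Println("apiserver not accepting connections yet; skip the apply and retry later")
			return
		}
		fmt.Println("apiserver port open; safe to run kubectl apply")
	}
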
	I1014 19:41:48.021563  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.021661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.022072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:48.521661  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.522153  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:48.522222  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:49.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.021869  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.022246  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:49.521919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.521999  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.021911  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.022358  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.522021  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.522121  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.522513  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:50.522647  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:51.022257  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.022355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.022711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:51.521301  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.521820  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.021365  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.021844  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.521373  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.521451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.521825  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:53.021413  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.021940  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:53.022029  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:53.521560  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.521663  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.522072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.021872  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.021964  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.521983  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.522067  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.021263  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.021357  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.521288  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.521376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.521739  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:55.521840  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:56.021322  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.021409  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:56.521370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.521452  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.521831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.022041  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.022397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.522061  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.522137  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.522480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:57.522553  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:58.022151  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.022236  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.022597  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:58.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.522322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.522668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.021717  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.521251  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.521330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.521703  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:00.021653  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.021752  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.022142  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:00.022220  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:00.522036  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.522123  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.522466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.022199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.022290  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.022633  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.521196  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.521278  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.521637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:02.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.022335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.022740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:02.022848  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:02.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.521405  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.521800  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.021392  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.021749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.521348  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.521443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.521938  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.021944  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.022035  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.022414  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.522132  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.522227  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.522582  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:04.522653  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:05.021481  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.021561  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.021905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:05.521556  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.521637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.522027  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.021613  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.021699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.022057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.521633  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:07.021749  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.021848  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.022194  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:07.022260  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:07.521871  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.521957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.021955  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.022031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.022379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.522039  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.522117  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.522476  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:09.022164  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.022634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:09.022701  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:09.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.521333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.521715  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.021732  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.022260  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.521865  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.522296  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.022051  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.022419  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.522129  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:11.522681  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:12.022256  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.022343  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.022700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:12.521278  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.521732  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.022114  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.022198  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.022561  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.522319  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.522711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:13.522798  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:14.021579  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.021707  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.022154  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.521710  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.522225  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.674573  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:42:14.729085  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729138  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729273  437269 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
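
	After the retry budget is spent, the failure is not fatal to the start flow: addons.go returns the callback error and it is printed as a "!" warning while the node-readiness poll keeps running. A minimal sketch of that warn-and-continue pattern (the function names here are illustrative, not minikube's API):

	package main

	import "fmt"

	// enableAddon runs an addon's apply callbacks and returns the first error.
	func enableAddon(name string, callbacks []func() error) error {
		for _, cb := range callbacks {
			if err := cb(); err != nil {
				return fmt.Errorf("running callbacks: %w", err)
			}
		}
		return nil
	}

	func main() {
		failing := []func() error{
			func() error { return fmt.Errorf("connect: connection refused") },
		}
		// Warn instead of aborting, mirroring the "! Enabling ... returned an error" line.
		if err := enableAddon("default-storageclass", failing); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: %v\n", "default-storageclass", err)
		}
		fmt.Println("continuing with cluster start despite the addon failure")
	}
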
	I1014 19:42:15.021737  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.021834  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.022205  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:15.521930  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.522012  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:16.022056  437269 type.go:168] "Request Body" body=""
	I1014 19:42:16.022143  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:16.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:16.022609  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:16.522173  437269 type.go:168] "Request Body" body=""
	I1014 19:42:16.522253  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:16.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:17.021294  437269 type.go:168] "Request Body" body=""
	I1014 19:42:17.021370  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:17.021733  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:17.521444  437269 type.go:168] "Request Body" body=""
	I1014 19:42:17.521548  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:17.521910  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:18.022124  437269 type.go:168] "Request Body" body=""
	I1014 19:42:18.022209  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:18.022551  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:18.022636  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:18.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:18.522276  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:18.522605  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:19.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:19.022337  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:19.022731  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 poll repeats every ~500ms from 19:42:19.521 through 19:42:25.522 with identical request headers and empty responses; node_ready.go:55 "will retry" warnings for the same connection-refused error are logged at 19:42:20, 19:42:22 and 19:42:25]
	I1014 19:42:25.661672  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:42:25.715017  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717809  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717938  437269 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 19:42:25.719888  437269 out.go:179] * Enabled addons: 
	I1014 19:42:25.722455  437269 addons.go:514] duration metric: took 1m51.818834592s for enable addons: enabled=[]
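Editor's note: the addon failure above is secondary, not a manifest problem; kubectl apply cannot download the OpenAPI schema because the apiserver on port 8441 is down, and while the error suggests --validate=false, skipping validation would not help since the apply itself still needs a reachable apiserver. A minimal sketch of the retry-on-apply behaviour the "apply failed, will retry" warning implies, assuming kubectl on PATH and the manifest path from the log; this mirrors the logged behaviour, not minikube's addons.go itself:

// Hypothetical sketch: retry `kubectl apply` a few times, surfacing
// the combined output of each failed attempt.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Println("applied:", string(out))
			return
		}
		// While the apiserver is unreachable, every attempt fails with the
		// "failed to download openapi ... connection refused" error above.
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(2 * time.Second)
	}
}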
	I1014 19:42:26.021269  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.021349  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.021816  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:26.521369  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.521477  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.521916  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:27.021507  437269 type.go:168] "Request Body" body=""
	I1014 19:42:27.021605  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:27.021991  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:27.521602  437269 type.go:168] "Request Body" body=""
	I1014 19:42:27.521721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:27.522084  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:27.522146  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:28.021642  437269 type.go:168] "Request Body" body=""
	I1014 19:42:28.021743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:28.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:28.521702  437269 type.go:168] "Request Body" body=""
	I1014 19:42:28.521807  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:28.522163  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:29.021797  437269 type.go:168] "Request Body" body=""
	I1014 19:42:29.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:29.022267  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:29.522074  437269 type.go:168] "Request Body" body=""
	I1014 19:42:29.522173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:29.522553  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:29.522671  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:30.021560  437269 type.go:168] "Request Body" body=""
	I1014 19:42:30.021654  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:30.022115  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:30.521649  437269 type.go:168] "Request Body" body=""
	I1014 19:42:30.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:30.522178  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:31.021725  437269 type.go:168] "Request Body" body=""
	I1014 19:42:31.021826  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:31.022186  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:31.521880  437269 type.go:168] "Request Body" body=""
	I1014 19:42:31.521996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:31.522379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:32.021983  437269 type.go:168] "Request Body" body=""
	I1014 19:42:32.022060  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:32.022435  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:32.022510  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:32.522077  437269 type.go:168] "Request Body" body=""
	I1014 19:42:32.522170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:32.522524  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:33.022165  437269 type.go:168] "Request Body" body=""
	I1014 19:42:33.022248  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:33.022592  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:33.521797  437269 type.go:168] "Request Body" body=""
	I1014 19:42:33.522204  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:33.522657  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:34.021345  437269 type.go:168] "Request Body" body=""
	I1014 19:42:34.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:34.021864  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:34.521442  437269 type.go:168] "Request Body" body=""
	I1014 19:42:34.521536  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:34.521932  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:34.522018  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:35.021950  437269 type.go:168] "Request Body" body=""
	I1014 19:42:35.022028  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:35.022451  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:35.521247  437269 type.go:168] "Request Body" body=""
	I1014 19:42:35.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:35.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:36.021379  437269 type.go:168] "Request Body" body=""
	I1014 19:42:36.021471  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:36.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:36.521476  437269 type.go:168] "Request Body" body=""
	I1014 19:42:36.521569  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:36.521989  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:36.522059  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:37.021550  437269 type.go:168] "Request Body" body=""
	I1014 19:42:37.021627  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:37.022016  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:37.521641  437269 type.go:168] "Request Body" body=""
	I1014 19:42:37.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:37.522187  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:38.021859  437269 type.go:168] "Request Body" body=""
	I1014 19:42:38.021939  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:38.022324  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:38.521989  437269 type.go:168] "Request Body" body=""
	I1014 19:42:38.522080  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:38.522434  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:38.522503  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:39.022081  437269 type.go:168] "Request Body" body=""
	I1014 19:42:39.022165  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:39.022503  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:39.522189  437269 type.go:168] "Request Body" body=""
	I1014 19:42:39.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:39.522650  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:40.021651  437269 type.go:168] "Request Body" body=""
	I1014 19:42:40.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:40.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:40.521658  437269 type.go:168] "Request Body" body=""
	I1014 19:42:40.521778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:40.522143  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:41.021691  437269 type.go:168] "Request Body" body=""
	I1014 19:42:41.021793  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:41.022157  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:41.022225  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:41.521808  437269 type.go:168] "Request Body" body=""
	I1014 19:42:41.521901  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:41.522267  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:42.021874  437269 type.go:168] "Request Body" body=""
	I1014 19:42:42.021955  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:42.022329  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:42.521975  437269 type.go:168] "Request Body" body=""
	I1014 19:42:42.522059  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:42.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:43.022032  437269 type.go:168] "Request Body" body=""
	I1014 19:42:43.022120  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:43.022486  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:43.022552  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:43.522253  437269 type.go:168] "Request Body" body=""
	I1014 19:42:43.522342  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:43.522709  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:44.021548  437269 type.go:168] "Request Body" body=""
	I1014 19:42:44.021646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:44.022079  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:44.521677  437269 type.go:168] "Request Body" body=""
	I1014 19:42:44.521784  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:44.522202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:45.022110  437269 type.go:168] "Request Body" body=""
	I1014 19:42:45.022196  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:45.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:45.022661  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:45.522180  437269 type.go:168] "Request Body" body=""
	I1014 19:42:45.522266  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:45.522677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:46.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:42:46.021324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:46.021716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:46.521270  437269 type.go:168] "Request Body" body=""
	I1014 19:42:46.521348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:46.521722  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:47.021311  437269 type.go:168] "Request Body" body=""
	I1014 19:42:47.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:47.021779  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:47.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:42:47.521433  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:47.521823  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:47.521900  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:48.021360  437269 type.go:168] "Request Body" body=""
	I1014 19:42:48.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:48.021837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:48.521366  437269 type.go:168] "Request Body" body=""
	I1014 19:42:48.521469  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:48.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:49.022003  437269 type.go:168] "Request Body" body=""
	I1014 19:42:49.022085  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:49.022428  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:49.522046  437269 type.go:168] "Request Body" body=""
	I1014 19:42:49.522124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:49.522478  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:49.522562  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:50.021433  437269 type.go:168] "Request Body" body=""
	I1014 19:42:50.021542  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:50.021987  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:50.521590  437269 type.go:168] "Request Body" body=""
	I1014 19:42:50.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:50.521991  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:51.021671  437269 type.go:168] "Request Body" body=""
	I1014 19:42:51.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:51.022149  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:51.521719  437269 type.go:168] "Request Body" body=""
	I1014 19:42:51.521832  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:51.522215  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:52.021893  437269 type.go:168] "Request Body" body=""
	I1014 19:42:52.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:52.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:52.022411  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:52.522080  437269 type.go:168] "Request Body" body=""
	I1014 19:42:52.522183  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:52.522617  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:53.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:42:53.022323  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:53.022716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:53.521304  437269 type.go:168] "Request Body" body=""
	I1014 19:42:53.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:53.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:54.021685  437269 type.go:168] "Request Body" body=""
	I1014 19:42:54.021789  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:54.521747  437269 type.go:168] "Request Body" body=""
	I1014 19:42:54.521851  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:54.522275  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:54.522352  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:55.022087  437269 type.go:168] "Request Body" body=""
	I1014 19:42:55.022177  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:55.022557  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:55.522187  437269 type.go:168] "Request Body" body=""
	I1014 19:42:55.522285  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:55.522718  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:56.021281  437269 type.go:168] "Request Body" body=""
	I1014 19:42:56.021383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:56.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:42:56.521430  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:56.521815  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:57.021386  437269 type.go:168] "Request Body" body=""
	I1014 19:42:57.021483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:57.021914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:57.021999  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:57.521600  437269 type.go:168] "Request Body" body=""
	I1014 19:42:57.521687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:57.522087  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:58.021700  437269 type.go:168] "Request Body" body=""
	I1014 19:42:58.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:58.022207  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:58.521870  437269 type.go:168] "Request Body" body=""
	I1014 19:42:58.521949  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:58.522303  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:59.021970  437269 type.go:168] "Request Body" body=""
	I1014 19:42:59.022045  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:59.022443  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:59.022507  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:59.522038  437269 type.go:168] "Request Body" body=""
	I1014 19:42:59.522131  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:59.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:00.021506  437269 type.go:168] "Request Body" body=""
	I1014 19:43:00.021597  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:00.021981  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:00.521539  437269 type.go:168] "Request Body" body=""
	I1014 19:43:00.521625  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:00.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:01.021567  437269 type.go:168] "Request Body" body=""
	I1014 19:43:01.021646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:01.022034  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:01.521607  437269 type.go:168] "Request Body" body=""
	I1014 19:43:01.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:01.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:01.522169  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:02.021674  437269 type.go:168] "Request Body" body=""
	I1014 19:43:02.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:02.022118  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:02.521701  437269 type.go:168] "Request Body" body=""
	I1014 19:43:02.521802  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:02.522123  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:03.021671  437269 type.go:168] "Request Body" body=""
	I1014 19:43:03.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:03.022117  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:03.521807  437269 type.go:168] "Request Body" body=""
	I1014 19:43:03.521898  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:03.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:03.522377  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:04.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:43:04.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:04.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:04.521290  437269 type.go:168] "Request Body" body=""
	I1014 19:43:04.521389  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:04.521814  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:05.021660  437269 type.go:168] "Request Body" body=""
	I1014 19:43:05.021743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:05.022150  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:05.521749  437269 type.go:168] "Request Body" body=""
	I1014 19:43:05.521888  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:05.522240  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:06.021896  437269 type.go:168] "Request Body" body=""
	I1014 19:43:06.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:06.022415  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:06.022501  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:06.522060  437269 type.go:168] "Request Body" body=""
	I1014 19:43:06.522142  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:06.522496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:07.022152  437269 type.go:168] "Request Body" body=""
	I1014 19:43:07.022255  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:07.022672  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:07.521243  437269 type.go:168] "Request Body" body=""
	I1014 19:43:07.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:07.521730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:08.021306  437269 type.go:168] "Request Body" body=""
	I1014 19:43:08.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:08.021797  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:08.521379  437269 type.go:168] "Request Body" body=""
	I1014 19:43:08.521475  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:08.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:08.521921  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:09.021427  437269 type.go:168] "Request Body" body=""
	I1014 19:43:09.021525  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:09.021943  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:09.521610  437269 type.go:168] "Request Body" body=""
	I1014 19:43:09.521709  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:09.522074  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:10.021890  437269 type.go:168] "Request Body" body=""
	I1014 19:43:10.021973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:10.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:10.522040  437269 type.go:168] "Request Body" body=""
	I1014 19:43:10.522122  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:10.522464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:10.522545  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:11.021678  437269 type.go:168] "Request Body" body=""
	I1014 19:43:11.021775  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:11.022124  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:11.521786  437269 type.go:168] "Request Body" body=""
	I1014 19:43:11.521865  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:11.522285  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:12.021630  437269 type.go:168] "Request Body" body=""
	I1014 19:43:12.021721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:12.022083  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:12.521655  437269 type.go:168] "Request Body" body=""
	I1014 19:43:12.521751  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:12.522185  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:13.021857  437269 type.go:168] "Request Body" body=""
	I1014 19:43:13.021947  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:13.022329  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:13.022419  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling loop continued unchanged: GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 every ~500ms from 19:43:13.521 through 19:44:13.022, each request carrying the same Accept/User-Agent headers and each response empty (status="" headers="" milliseconds=0); node_ready.go:55 repeated the warning error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused roughly every two seconds ...]
	I1014 19:44:13.521351  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.521431  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.521806  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.021818  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.021912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.522064  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.522156  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.522518  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.021381  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.021468  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.021826  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.521487  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:15.521934  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:16.021382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.021472  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:16.521402  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.521496  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.521958  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.021537  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.021618  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.022006  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.521572  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.521652  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.522068  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:17.522135  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:18.021636  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.022112  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:18.521664  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.521790  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.522173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.021791  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.021887  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.022264  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.521890  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.521989  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:19.522432  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:20.022234  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.022313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:20.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.521321  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.021357  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.021856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.521555  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.521969  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:22.021534  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.022029  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:22.022098  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:22.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.521729  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.522128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.021712  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.021820  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.022176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.521802  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.521885  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.522258  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:24.022112  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.022201  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.022532  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:24.022600  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:24.522195  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.522634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.021596  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.021676  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.022088  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.521654  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.521741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.522131  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.021684  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.021798  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.022168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.521801  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.522232  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:26.522299  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:27.021847  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.021933  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.022292  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:27.521878  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.521963  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.522328  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.021519  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.021599  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.021968  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.521573  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.521667  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.522077  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:29.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.021839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.022235  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:29.022308  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:29.521910  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.522006  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.522371  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.021252  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.021348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.021744  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.521308  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.521407  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.521858  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.021447  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.021537  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.021993  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.521577  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.521661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.522091  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:31.522171  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:32.021679  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.022180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:32.521862  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.521962  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.522305  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.022031  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.022124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.022484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.522294  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.522643  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:33.522730  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:34.021707  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.021853  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.022332  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:34.522025  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.522147  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.522536  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.021511  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.021620  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.022043  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.522236  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.522681  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:36.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.021313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.021734  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:36.021830  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:36.521316  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.521393  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.521798  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.021352  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.021434  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.521479  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.521566  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.521949  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:38.021522  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.021608  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.022020  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:38.022085  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:38.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.522063  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.021622  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.021702  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.022125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.521740  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.521841  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.522231  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:40.022072  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.022157  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:40.022560  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:40.522145  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.522230  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.021191  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.021271  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.021663  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.521242  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.521677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.021221  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.021300  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.021721  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.521295  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:42.521860  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:43.021377  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.021470  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.021882  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:43.521445  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.521535  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.021811  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.521977  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.522062  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:44.522472  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:45.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.021700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:45.521363  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.521476  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.021493  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.021898  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.521589  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.521682  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.522048  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:47.021649  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.021730  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.022119  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:47.022190  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:47.521670  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.022200  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.521828  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.521908  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:49.021930  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.022025  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.022391  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:49.022471  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:49.522012  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.522093  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.522436  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.021359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.021746  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.521381  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.521749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.021292  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.021375  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.021830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.521389  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.521483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:51.521938  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:52.021392  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.021501  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.021933  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:52.521524  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.521606  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.522002  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.021549  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.522129  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:53.522202  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:54.022063  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.022155  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.022563  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:54.522249  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.522346  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.522749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.021750  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.022126  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.521847  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.522237  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:55.522304  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:56.021875  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.021958  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:56.521953  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.522031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.522402  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.022099  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.022184  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.022571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.522215  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.522635  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:57.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:58.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.021331  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.021778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:58.521330  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.521406  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.521792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.021307  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.021783  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.521317  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.521404  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.521833  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:00.021727  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.021828  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.022220  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:00.022290  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:00.521874  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.521969  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.522342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:01.022108  437269 type.go:168] "Request Body" body=""
	I1014 19:45:01.022195  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:01.022598  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:01.521221  437269 type.go:168] "Request Body" body=""
	I1014 19:45:01.521312  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:01.521684  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:02.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:45:02.021345  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:02.021741  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:02.521281  437269 type.go:168] "Request Body" body=""
	I1014 19:45:02.521368  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:02.521783  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:02.521850  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:03.021427  437269 type.go:168] "Request Body" body=""
	I1014 19:45:03.021538  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:03.022017  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:03.521576  437269 type.go:168] "Request Body" body=""
	I1014 19:45:03.521665  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:03.522065  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:04.021968  437269 type.go:168] "Request Body" body=""
	I1014 19:45:04.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:04.022412  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:04.522089  437269 type.go:168] "Request Body" body=""
	I1014 19:45:04.522186  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:04.522588  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:04.522669  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:05.021532  437269 type.go:168] "Request Body" body=""
	I1014 19:45:05.021627  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:05.022032  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:05.521660  437269 type.go:168] "Request Body" body=""
	I1014 19:45:05.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:05.522144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:06.021836  437269 type.go:168] "Request Body" body=""
	I1014 19:45:06.021915  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:06.022313  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:06.522006  437269 type.go:168] "Request Body" body=""
	I1014 19:45:06.522090  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:06.522505  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:07.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:45:07.022282  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:07.022657  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:07.022726  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:07.522255  437269 type.go:168] "Request Body" body=""
	I1014 19:45:07.522341  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:07.522733  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:08.021293  437269 type.go:168] "Request Body" body=""
	I1014 19:45:08.021376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:08.021784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:08.521329  437269 type.go:168] "Request Body" body=""
	I1014 19:45:08.521407  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:08.521815  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:09.021335  437269 type.go:168] "Request Body" body=""
	I1014 19:45:09.021426  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:09.021821  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:09.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:45:09.521433  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:09.521870  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:09.521948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:10.021750  437269 type.go:168] "Request Body" body=""
	I1014 19:45:10.021864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:10.022248  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:10.521887  437269 type.go:168] "Request Body" body=""
	I1014 19:45:10.521973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:10.522362  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:11.022015  437269 type.go:168] "Request Body" body=""
	I1014 19:45:11.022096  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:11.022432  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:11.522073  437269 type.go:168] "Request Body" body=""
	I1014 19:45:11.522158  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:11.522547  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:11.522623  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:12.022259  437269 type.go:168] "Request Body" body=""
	I1014 19:45:12.022347  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:12.022850  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:12.521359  437269 type.go:168] "Request Body" body=""
	I1014 19:45:12.521448  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:12.521849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:13.021409  437269 type.go:168] "Request Body" body=""
	I1014 19:45:13.021494  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:13.021916  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:13.521532  437269 type.go:168] "Request Body" body=""
	I1014 19:45:13.521618  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:13.521981  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:14.021969  437269 type.go:168] "Request Body" body=""
	I1014 19:45:14.022049  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:14.022447  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:14.022510  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:14.522094  437269 type.go:168] "Request Body" body=""
	I1014 19:45:14.522176  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:14.522545  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:15.021509  437269 type.go:168] "Request Body" body=""
	I1014 19:45:15.021606  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:15.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:15.521593  437269 type.go:168] "Request Body" body=""
	I1014 19:45:15.521690  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:15.522096  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:16.021646  437269 type.go:168] "Request Body" body=""
	I1014 19:45:16.021736  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:16.022135  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:16.521804  437269 type.go:168] "Request Body" body=""
	I1014 19:45:16.521890  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:16.522248  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:16.522324  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:17.021975  437269 type.go:168] "Request Body" body=""
	I1014 19:45:17.022056  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:17.022447  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:17.522108  437269 type.go:168] "Request Body" body=""
	I1014 19:45:17.522191  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:17.522594  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:18.022251  437269 type.go:168] "Request Body" body=""
	I1014 19:45:18.022333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:18.022725  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:18.521289  437269 type.go:168] "Request Body" body=""
	I1014 19:45:18.521376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:18.521812  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:19.021383  437269 type.go:168] "Request Body" body=""
	I1014 19:45:19.021484  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:19.021904  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:19.021980  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:19.521516  437269 type.go:168] "Request Body" body=""
	I1014 19:45:19.521604  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:19.522056  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:20.021651  437269 type.go:168] "Request Body" body=""
	I1014 19:45:20.021732  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:20.022182  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:20.521732  437269 type.go:168] "Request Body" body=""
	I1014 19:45:20.521838  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:20.522198  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:21.021907  437269 type.go:168] "Request Body" body=""
	I1014 19:45:21.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:21.022351  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:21.022430  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:21.521976  437269 type.go:168] "Request Body" body=""
	I1014 19:45:21.522056  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:21.522417  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:22.022086  437269 type.go:168] "Request Body" body=""
	I1014 19:45:22.022170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:22.022544  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:22.522193  437269 type.go:168] "Request Body" body=""
	I1014 19:45:22.522282  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:22.522668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:23.021253  437269 type.go:168] "Request Body" body=""
	I1014 19:45:23.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:23.021784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:23.521356  437269 type.go:168] "Request Body" body=""
	I1014 19:45:23.521450  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:23.521977  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:23.522059  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:24.021741  437269 type.go:168] "Request Body" body=""
	I1014 19:45:24.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:24.022224  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:24.521890  437269 type.go:168] "Request Body" body=""
	I1014 19:45:24.521984  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:24.522357  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:25.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:45:25.022360  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:25.022739  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:25.521985  437269 type.go:168] "Request Body" body=""
	I1014 19:45:25.522068  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:25.522428  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:25.522491  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:26.022071  437269 type.go:168] "Request Body" body=""
	I1014 19:45:26.022170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:26.022519  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:26.521198  437269 type.go:168] "Request Body" body=""
	I1014 19:45:26.521288  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:26.521676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:27.021978  437269 type.go:168] "Request Body" body=""
	I1014 19:45:27.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:27.022419  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:27.522151  437269 type.go:168] "Request Body" body=""
	I1014 19:45:27.522230  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:27.522643  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:27.522714  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:28.021218  437269 type.go:168] "Request Body" body=""
	I1014 19:45:28.021312  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:28.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:28.521312  437269 type.go:168] "Request Body" body=""
	I1014 19:45:28.521403  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:28.521840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:29.021354  437269 type.go:168] "Request Body" body=""
	I1014 19:45:29.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:29.021854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:29.521378  437269 type.go:168] "Request Body" body=""
	I1014 19:45:29.521458  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:29.521850  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:30.021662  437269 type.go:168] "Request Body" body=""
	I1014 19:45:30.021789  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:30.022146  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:30.022213  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:30.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:45:30.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:30.522211  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:31.021880  437269 type.go:168] "Request Body" body=""
	I1014 19:45:31.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:31.022332  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:31.522123  437269 type.go:168] "Request Body" body=""
	I1014 19:45:31.522204  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:31.522575  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:32.022205  437269 type.go:168] "Request Body" body=""
	I1014 19:45:32.022295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:32.022647  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:32.022725  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:32.521198  437269 type.go:168] "Request Body" body=""
	I1014 19:45:32.521290  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:32.521668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:33.021206  437269 type.go:168] "Request Body" body=""
	I1014 19:45:33.021284  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:33.021669  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:33.521252  437269 type.go:168] "Request Body" body=""
	I1014 19:45:33.521335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:33.521732  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:34.021648  437269 type.go:168] "Request Body" body=""
	I1014 19:45:34.021738  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:34.022124  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:34.521677  437269 type.go:168] "Request Body" body=""
	I1014 19:45:34.521786  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:34.522167  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:34.522228  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:35.021984  437269 type.go:168] "Request Body" body=""
	I1014 19:45:35.022074  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:35.022422  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:35.522074  437269 type.go:168] "Request Body" body=""
	I1014 19:45:35.522161  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:35.522560  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:36.022246  437269 type.go:168] "Request Body" body=""
	I1014 19:45:36.022332  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:36.022735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:36.521326  437269 type.go:168] "Request Body" body=""
	I1014 19:45:36.521412  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:36.521843  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:37.021388  437269 type.go:168] "Request Body" body=""
	I1014 19:45:37.021485  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:37.021891  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:37.021957  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:37.521503  437269 type.go:168] "Request Body" body=""
	I1014 19:45:37.521585  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:37.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:38.021579  437269 type.go:168] "Request Body" body=""
	I1014 19:45:38.021679  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:38.022059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:38.521663  437269 type.go:168] "Request Body" body=""
	I1014 19:45:38.521751  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:38.522160  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:39.021909  437269 type.go:168] "Request Body" body=""
	I1014 19:45:39.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:39.022378  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:39.022449  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:39.522030  437269 type.go:168] "Request Body" body=""
	I1014 19:45:39.522107  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:39.522416  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:40.021388  437269 type.go:168] "Request Body" body=""
	I1014 19:45:40.021481  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:40.021844  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:40.521422  437269 type.go:168] "Request Body" body=""
	I1014 19:45:40.521523  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:40.521966  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:41.021564  437269 type.go:168] "Request Body" body=""
	I1014 19:45:41.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:41.022031  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:41.521648  437269 type.go:168] "Request Body" body=""
	I1014 19:45:41.521734  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:41.522167  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:41.522236  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:42.021731  437269 type.go:168] "Request Body" body=""
	I1014 19:45:42.021836  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:42.022192  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:42.521731  437269 type.go:168] "Request Body" body=""
	I1014 19:45:42.521839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:42.522217  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:43.021906  437269 type.go:168] "Request Body" body=""
	I1014 19:45:43.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:43.022331  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:43.522111  437269 type.go:168] "Request Body" body=""
	I1014 19:45:43.522198  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:43.522589  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:43.522675  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:44.021291  437269 type.go:168] "Request Body" body=""
	I1014 19:45:44.021372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:44.021800  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:44.521363  437269 type.go:168] "Request Body" body=""
	I1014 19:45:44.521449  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:44.521869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:45.021752  437269 type.go:168] "Request Body" body=""
	I1014 19:45:45.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:45.022233  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:45.521855  437269 type.go:168] "Request Body" body=""
	I1014 19:45:45.521941  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:45.522316  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:46.022006  437269 type.go:168] "Request Body" body=""
	I1014 19:45:46.022095  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:46.022499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:46.022579  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:46.522210  437269 type.go:168] "Request Body" body=""
	I1014 19:45:46.522318  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:46.522722  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:47.021283  437269 type.go:168] "Request Body" body=""
	I1014 19:45:47.021385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:47.021781  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:47.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:45:47.521536  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:47.521995  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:48.021575  437269 type.go:168] "Request Body" body=""
	I1014 19:45:48.021686  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:48.022099  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:48.521787  437269 type.go:168] "Request Body" body=""
	I1014 19:45:48.521871  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:48.522261  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:48.522369  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:49.021944  437269 type.go:168] "Request Body" body=""
	I1014 19:45:49.022027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:49.022513  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:49.522168  437269 type.go:168] "Request Body" body=""
	I1014 19:45:49.522247  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:49.522598  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:50.021501  437269 type.go:168] "Request Body" body=""
	I1014 19:45:50.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:50.022004  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:50.521581  437269 type.go:168] "Request Body" body=""
	I1014 19:45:50.521669  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:50.522045  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:51.021656  437269 type.go:168] "Request Body" body=""
	I1014 19:45:51.021788  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:51.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:51.022212  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:51.521847  437269 type.go:168] "Request Body" body=""
	I1014 19:45:51.521925  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:51.522299  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:52.022088  437269 type.go:168] "Request Body" body=""
	I1014 19:45:52.022197  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:52.022587  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:52.522247  437269 type.go:168] "Request Body" body=""
	I1014 19:45:52.522330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:52.522658  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:53.021334  437269 type.go:168] "Request Body" body=""
	I1014 19:45:53.021438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:53.021860  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:53.521371  437269 type.go:168] "Request Body" body=""
	I1014 19:45:53.521458  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:53.521812  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:53.521887  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:54.021737  437269 type.go:168] "Request Body" body=""
	I1014 19:45:54.021853  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:54.022236  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:54.521871  437269 type.go:168] "Request Body" body=""
	I1014 19:45:54.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:54.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:55.022188  437269 type.go:168] "Request Body" body=""
	I1014 19:45:55.022267  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:55.022698  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:55.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:45:55.521387  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:55.521745  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:56.021324  437269 type.go:168] "Request Body" body=""
	I1014 19:45:56.021405  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:56.021853  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:56.021933  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:56.521381  437269 type.go:168] "Request Body" body=""
	I1014 19:45:56.521492  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:56.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:57.021449  437269 type.go:168] "Request Body" body=""
	I1014 19:45:57.021569  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:57.022053  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:57.521631  437269 type.go:168] "Request Body" body=""
	I1014 19:45:57.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:57.522096  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:58.021695  437269 type.go:168] "Request Body" body=""
	I1014 19:45:58.021812  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:58.022220  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:58.022300  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:58.521874  437269 type.go:168] "Request Body" body=""
	I1014 19:45:58.521965  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:58.522333  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:59.021991  437269 type.go:168] "Request Body" body=""
	I1014 19:45:59.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:59.022475  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:59.522167  437269 type.go:168] "Request Body" body=""
	I1014 19:45:59.522245  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:59.522597  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:00.021599  437269 type.go:168] "Request Body" body=""
	I1014 19:46:00.021701  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:00.022127  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:00.521743  437269 type.go:168] "Request Body" body=""
	I1014 19:46:00.521861  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:00.522238  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:00.522338  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:01.022015  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.022109  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:01.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.522284  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.522792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.021802  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.521435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:03.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.021512  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.021843  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:03.021936  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:03.521495  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.521638  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.522055  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.022126  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.022216  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.022594  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.522303  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.522679  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:05.021591  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.021704  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:05.022161  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:05.521689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.521808  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.522192  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.021790  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.022280  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.521951  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.522040  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.522397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:07.022069  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.022173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:07.022606  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:07.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.522298  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.522637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.021220  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.021314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.021696  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.521279  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.521778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.021343  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.021451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.021866  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.521459  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.521838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:09.521913  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:10.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.021744  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:10.521668  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.521745  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.522134  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.021817  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.022226  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.521863  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.521950  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.522316  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:11.522391  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:12.022004  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.022466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:12.522152  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.522231  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.522572  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.022208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.022686  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.521212  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.521286  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.521620  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:14.021358  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.021869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:14.021948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:14.521427  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.521830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.521922  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.522020  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.522429  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:16.022119  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.022517  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:16.022586  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:16.521207  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.521315  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.521711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.021272  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.021355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.021723  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.521289  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.021359  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.021849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:18.521988  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:19.021521  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:19.521715  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.022258  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.022646  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.522713  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:20.522805  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:21.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.021805  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:21.521347  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.521438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.021364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.021456  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.021861  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.521399  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.521520  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.521917  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:23.021531  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.021637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.022036  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:23.022100  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:23.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.522062  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.021884  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.021977  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.022350  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.522011  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.522097  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.522508  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.021512  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.021596  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.521632  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.521726  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.522148  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:25.522244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:26.021740  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.022219  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:26.521873  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.521956  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.022036  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.022129  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.522188  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:27.522745  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:28.021236  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.021317  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.021676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:28.521949  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.522027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.522409  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.022101  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.022190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.022539  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.522171  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.522256  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.522639  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:30.021643  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:30.022208  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:30.521811  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.521894  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.522289  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.022066  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.022164  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.522208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.522719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.021314  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.021832  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.521461  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:32.521920  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:33.021401  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:33.521545  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.521653  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:34.021736  437269 type.go:168] "Request Body" body=""
	I1014 19:46:34.022027  437269 node_ready.go:38] duration metric: took 6m0.00093705s for node "functional-744288" to be "Ready" ...
	I1014 19:46:34.025220  437269 out.go:203] 
	W1014 19:46:34.026860  437269 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 19:46:34.026878  437269 out.go:285] * 
	W1014 19:46:34.028574  437269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:46:34.030019  437269 out.go:203] 
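
	The wait loop above is the proximate failure: roughly every 500ms minikube issues GET https://192.168.49.2:8441/api/v1/nodes/functional-744288, the dial is refused because the apiserver container never starts, and after 6m0s the context deadline fires as GUEST_START. A minimal sketch of such a readiness poll with client-go is below; it is an illustration only, not minikube's actual node_ready.go, and the file name and kubeconfig path are assumptions taken from the kubectl invocation elsewhere in this log.

	// node_ready_sketch.go: a minimal sketch (not minikube's implementation)
	// of the "wait for node Ready" loop whose retries fill the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig pointing at https://192.168.49.2:8441.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, give up after 6 minutes, mirroring the cadence
		// and the "wait 6m0s for node" deadline visible in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-744288", metav1.GetOptions{})
				if err != nil {
					return false, nil // connection refused lands here; keep retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // "context deadline exceeded" on timeout
	}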
	
	
	==> CRI-O <==
	Oct 14 19:46:25 functional-744288 crio[2959]: time="2025-10-14T19:46:25.865897028Z" level=info msg="createCtr: removing container ccfc95ec370c10a716864fba39534e209cf0a9312e0db89b974a3376ffb370eb" id=86cb8549-6226-44a6-bfd3-04e5ed39afcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:25 functional-744288 crio[2959]: time="2025-10-14T19:46:25.865934422Z" level=info msg="createCtr: deleting container ccfc95ec370c10a716864fba39534e209cf0a9312e0db89b974a3376ffb370eb from storage" id=86cb8549-6226-44a6-bfd3-04e5ed39afcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:25 functional-744288 crio[2959]: time="2025-10-14T19:46:25.868294101Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=86cb8549-6226-44a6-bfd3-04e5ed39afcd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.836445983Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6720772a-365f-4781-b1cb-e939e61a06dd name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.83732868Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=34baf7b3-8454-46e1-a99d-960fa0cd9960 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.838169422Z" level=info msg="Creating container: kube-system/etcd-functional-744288/etcd" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.838395543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.841772406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.842221085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.858687002Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.860065361Z" level=info msg="createCtr: deleting container ID 9270548b27f70a937f8292953c95d0e27e84d0b0e7f88e9c1caa4e28f165c013 from idIndex" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.860102297Z" level=info msg="createCtr: removing container 9270548b27f70a937f8292953c95d0e27e84d0b0e7f88e9c1caa4e28f165c013" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.860131936Z" level=info msg="createCtr: deleting container 9270548b27f70a937f8292953c95d0e27e84d0b0e7f88e9c1caa4e28f165c013 from storage" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:27 functional-744288 crio[2959]: time="2025-10-14T19:46:27.862154889Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=9740332f-0811-4b73-9383-c46c5fe0835e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.836454753Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9f0e354c-f410-49cf-b40b-5dc3a2f068d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.837508308Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=ecf135b6-fb09-4b1e-818a-ea425e1d5802 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.838541155Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.83878557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.842384767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.84286761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.863270708Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.864875307Z" level=info msg="createCtr: deleting container ID 7ce193f3d90a0164dc5f8a119bedab1855a8d1ceee719b1104fb805a11139ec2 from idIndex" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.864933692Z" level=info msg="createCtr: removing container 7ce193f3d90a0164dc5f8a119bedab1855a8d1ceee719b1104fb805a11139ec2" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.864984536Z" level=info msg="createCtr: deleting container 7ce193f3d90a0164dc5f8a119bedab1855a8d1ceee719b1104fb805a11139ec2 from storage" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:33 functional-744288 crio[2959]: time="2025-10-14T19:46:33.867380722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_7dacb23619ff0889511bcb2e81339e77_0" id=fa2f23ff-1580-4db8-ab00-5272f74c53b2 name=/runtime.v1.RuntimeService/CreateContainer
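
	The repeated CRI-O error above, "Container creation error: cannot open sd-bus: No such file or directory", is the root cause behind every kube-system container failing to start: the runtime tries to reach systemd over D-Bus (typically because it is configured with the systemd cgroup manager) and no bus endpoint is reachable inside the node container. A rough diagnostic sketch follows; the two socket paths are the conventional systemd and D-Bus locations, assumed here rather than read from this log.

	// sdbus_check.go: a rough diagnostic sketch, assuming the conventional
	// socket paths; "cannot open sd-bus" usually means neither is reachable.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		candidates := []string{
			"/run/systemd/private",        // systemd's private bus endpoint
			"/run/dbus/system_bus_socket", // the system D-Bus socket
		}
		for _, p := range candidates {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: missing (%v)\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}

	When neither socket exists, switching CRI-O's cgroup_manager from "systemd" to "cgroupfs" in crio.conf is the usual workaround, though whether that is the right fix here depends on how the minikube base image brings up systemd.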
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:46:38.069625    4510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:38.070300    4510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:38.071959    4510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:38.072456    4510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:38.073792    4510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
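
	Every kubectl-level error in this report collapses to the same symptom: nothing is listening on port 8441. A quick probe like the sketch below separates "connection refused" (no listener, the state shown here) from a listener whose TLS handshake fails. The endpoint is copied from the log; the probe itself is illustrative and not part of the test suite.

	// port_probe.go: sketch to distinguish "connection refused" (no listener)
	// from a TLS problem (listener up, handshake fails).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.49.2:8441" // apiserver endpoint from the log
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("no listener:", err) // the state this report is in
			return
		}
		defer conn.Close()
		// A listener exists; see whether a TLS handshake completes.
		tc := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
		if err := tc.Handshake(); err != nil {
			fmt.Println("TCP open but TLS failed:", err)
			return
		}
		fmt.Println("apiserver port is up and speaking TLS")
	}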
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:46:38 up  2:29,  0 user,  load average: 0.11, 0.06, 2.26
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:46:27 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:27 functional-744288 kubelet[1809]:  > podSandboxID="de75312ccca355aabaabb18a5eb1e6d7a7e4d5b3fb088ce1c5eb28a39d567355"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.862531    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:27 functional-744288 kubelet[1809]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:27 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.862564    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:46:27 functional-744288 kubelet[1809]: E1014 19:46:27.883708    1809 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:46:28 functional-744288 kubelet[1809]: E1014 19:46:28.516743    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:46:28 functional-744288 kubelet[1809]: I1014 19:46:28.739368    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:46:28 functional-744288 kubelet[1809]: E1014 19:46:28.739808    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.835893    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.867767    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:33 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:33 functional-744288 kubelet[1809]:  > podSandboxID="d501fdff2b92902ecd1a22b235a50d225f771b04701776d8a1bb0e78b9481d1c"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.867885    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:33 functional-744288 kubelet[1809]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(7dacb23619ff0889511bcb2e81339e77): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:33 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:33 functional-744288 kubelet[1809]: E1014 19:46:33.867920    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="7dacb23619ff0889511bcb2e81339e77"
	Oct 14 19:46:34 functional-744288 kubelet[1809]: E1014 19:46:34.372128    1809 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 19:46:34 functional-744288 kubelet[1809]: E1014 19:46:34.925423    1809 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 19:46:35 functional-744288 kubelet[1809]: E1014 19:46:35.517271    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:46:35 functional-744288 kubelet[1809]: I1014 19:46:35.741651    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:46:35 functional-744288 kubelet[1809]: E1014 19:46:35.742085    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:46:37 functional-744288 kubelet[1809]: E1014 19:46:37.101672    1809 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-744288.186e72ac19058e88\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e72ac19058e88  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:36:27.828178568 +0000 UTC m=+0.685163688,LastTimestamp:2025-10-14 19:36:27.829543993 +0000 UTC m=+0.686529115,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:46:37 functional-744288 kubelet[1809]: E1014 19:46:37.884599    1809 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (319.871944ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.21s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 kubectl -- --context functional-744288 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 kubectl -- --context functional-744288 get pods: exit status 1 (104.418482ms)

                                                
                                                
** stderr ** 
	E1014 19:46:46.049472  442713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:46.049839  442713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:46.051393  442713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:46.051775  442713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:46.053226  442713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-744288 kubectl -- --context functional-744288 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
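The "Ports" map in the inspect output above is where minikube and the harness resolve host-side ports: 22/tcp maps to 32898 for SSH and 8441/tcp to 32901 for the apiserver. A sketch of the same lookup from Go, shelling out to docker with the inspect template that also appears in the "Last Start" log below (container name copied from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Template mirrors the one the minikube logs use for port resolution.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-744288").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out)))
	}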
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (305.478424ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 logs -n 25: (1.005417224s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                              │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                              │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	│ start   │ -p functional-744288 --alsologtostderr -v=8                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:40 UTC │                     │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.1                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.3                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:latest                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add minikube-local-cache-test:functional-744288                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache delete minikube-local-cache-test:functional-744288                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl images                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ cache   │ functional-744288 cache reload                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ kubectl │ functional-744288 kubectl -- --context functional-744288 get pods                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
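	The final audit entry is the command under test. A sketch for replaying it by hand with os/exec, roughly as functional_test.go invokes it (binary path and profile name are copied from this report; exit status 1 is expected while the apiserver is unreachable):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-744288",
			"kubectl", "--", "--context", "functional-744288", "get", "pods")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// Mirrors the "exit status 1" recorded above.
			fmt.Println("command failed:", err)
		}
	}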
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:40:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:40:29.999204  437269 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:40:29.999451  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999459  437269 out.go:374] Setting ErrFile to fd 2...
	I1014 19:40:29.999463  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999664  437269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:40:30.000162  437269 out.go:368] Setting JSON to false
	I1014 19:40:30.001140  437269 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8576,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:40:30.001253  437269 start.go:141] virtualization: kvm guest
	I1014 19:40:30.003929  437269 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:40:30.005394  437269 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:40:30.005413  437269 notify.go:220] Checking for updates...
	I1014 19:40:30.008578  437269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:40:30.009922  437269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:30.011325  437269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:40:30.012721  437269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:40:30.014074  437269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:40:30.015738  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:30.015851  437269 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:40:30.041344  437269 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:40:30.041571  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.106855  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.095983875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.106976  437269 docker.go:318] overlay module found
	I1014 19:40:30.108953  437269 out.go:179] * Using the docker driver based on existing profile
	I1014 19:40:30.110337  437269 start.go:305] selected driver: docker
	I1014 19:40:30.110363  437269 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.110446  437269 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:40:30.110529  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.176521  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.165510899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.177154  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:30.177215  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:30.177273  437269 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.179329  437269 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:40:30.180795  437269 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:40:30.182356  437269 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:40:30.183701  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:30.183742  437269 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:40:30.183752  437269 cache.go:58] Caching tarball of preloaded images
	I1014 19:40:30.183799  437269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:40:30.183863  437269 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:40:30.183877  437269 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:40:30.183979  437269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:40:30.204077  437269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:40:30.204098  437269 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:40:30.204114  437269 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:40:30.204155  437269 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:40:30.204220  437269 start.go:364] duration metric: took 47.096µs to acquireMachinesLock for "functional-744288"
	I1014 19:40:30.204240  437269 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:40:30.204245  437269 fix.go:54] fixHost starting: 
	I1014 19:40:30.204447  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:30.222380  437269 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:40:30.222430  437269 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:40:30.224794  437269 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:40:30.224832  437269 machine.go:93] provisionDockerMachine start ...
	I1014 19:40:30.224915  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.243631  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.243897  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.243914  437269 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:40:30.392088  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.392121  437269 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:40:30.392200  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.410333  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.410549  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.410563  437269 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:40:30.567306  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.567398  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.585534  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.585774  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.585794  437269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
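	The guarded script above keeps the hostname update idempotent: it rewrites the 127.0.1.1 entry only if /etc/hosts does not already map functional-744288. A rough Go equivalent of that check, as a hypothetical standalone helper rather than minikube's own code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			// Skip the address column; match any hostname column.
			for i := 1; i < len(fields); i++ {
				if fields[i] == "functional-744288" {
					fmt.Println("hostname already mapped; nothing to do")
					return
				}
			}
		}
		fmt.Println(`would append "127.0.1.1 functional-744288"`)
	}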
	I1014 19:40:30.733740  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:40:30.733790  437269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:40:30.733813  437269 ubuntu.go:190] setting up certificates
	I1014 19:40:30.733825  437269 provision.go:84] configureAuth start
	I1014 19:40:30.733878  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:30.751946  437269 provision.go:143] copyHostCerts
	I1014 19:40:30.751989  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752023  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:40:30.752048  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752133  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:40:30.752237  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752267  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:40:30.752278  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752320  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:40:30.752387  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752412  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:40:30.752422  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752463  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:40:30.752709  437269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:40:31.076864  437269 provision.go:177] copyRemoteCerts
	I1014 19:40:31.076930  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:40:31.076971  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.095322  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.200396  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 19:40:31.200473  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:40:31.218084  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 19:40:31.218140  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:40:31.235905  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 19:40:31.235974  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:40:31.253074  437269 provision.go:87] duration metric: took 519.232689ms to configureAuth
	I1014 19:40:31.253110  437269 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:40:31.253264  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:31.253357  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.271451  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:31.271661  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:31.271677  437269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:40:31.540521  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:40:31.540549  437269 machine.go:96] duration metric: took 1.315709373s to provisionDockerMachine
	I1014 19:40:31.540561  437269 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:40:31.540571  437269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:40:31.540628  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:40:31.540669  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.559297  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.665251  437269 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:40:31.669234  437269 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1014 19:40:31.669258  437269 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1014 19:40:31.669267  437269 command_runner.go:130] > VERSION_ID="12"
	I1014 19:40:31.669270  437269 command_runner.go:130] > VERSION="12 (bookworm)"
	I1014 19:40:31.669276  437269 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1014 19:40:31.669279  437269 command_runner.go:130] > ID=debian
	I1014 19:40:31.669283  437269 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1014 19:40:31.669288  437269 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1014 19:40:31.669293  437269 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1014 19:40:31.669341  437269 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:40:31.669359  437269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:40:31.669371  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:40:31.669425  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:40:31.669510  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:40:31.669525  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 19:40:31.669592  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:40:31.669600  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> /etc/test/nested/copy/417373/hosts
	I1014 19:40:31.669633  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:40:31.677988  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:31.696543  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:40:31.715275  437269 start.go:296] duration metric: took 174.687158ms for postStartSetup
	I1014 19:40:31.715383  437269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:40:31.715428  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.734376  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.836456  437269 command_runner.go:130] > 39%
	I1014 19:40:31.836544  437269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:40:31.841513  437269 command_runner.go:130] > 178G
	I1014 19:40:31.841552  437269 fix.go:56] duration metric: took 1.637302821s for fixHost
	I1014 19:40:31.841566  437269 start.go:83] releasing machines lock for "functional-744288", held for 1.637335022s
	I1014 19:40:31.841633  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:31.859002  437269 ssh_runner.go:195] Run: cat /version.json
	I1014 19:40:31.859036  437269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:40:31.859053  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.859093  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.877314  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.877547  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.978415  437269 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1014 19:40:31.978583  437269 ssh_runner.go:195] Run: systemctl --version
	I1014 19:40:32.030433  437269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 19:40:32.032548  437269 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1014 19:40:32.032581  437269 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1014 19:40:32.032653  437269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:40:32.071124  437269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 19:40:32.075797  437269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 19:40:32.076143  437269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:40:32.076213  437269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:40:32.084774  437269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:40:32.084802  437269 start.go:495] detecting cgroup driver to use...
	I1014 19:40:32.084841  437269 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:40:32.084885  437269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:40:32.100807  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:40:32.114918  437269 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:40:32.115001  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:40:32.131082  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:40:32.145731  437269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:40:32.234963  437269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:40:32.329593  437269 docker.go:234] disabling docker service ...
	I1014 19:40:32.329671  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:40:32.344729  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:40:32.357712  437269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:40:32.445038  437269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:40:32.534134  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:40:32.547615  437269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:40:32.562780  437269 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 19:40:32.562835  437269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:40:32.562884  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.572580  437269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:40:32.572655  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.581715  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.590624  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.599492  437269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:40:32.607979  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.617026  437269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.625607  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.634661  437269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:40:32.642022  437269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 19:40:32.642101  437269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:40:32.649948  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:32.737827  437269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:40:32.854779  437269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:40:32.854851  437269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:40:32.859353  437269 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 19:40:32.859376  437269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 19:40:32.859382  437269 command_runner.go:130] > Device: 0,59	Inode: 3887        Links: 1
	I1014 19:40:32.859389  437269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:32.859394  437269 command_runner.go:130] > Access: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859399  437269 command_runner.go:130] > Modify: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859403  437269 command_runner.go:130] > Change: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859408  437269 command_runner.go:130] >  Birth: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859438  437269 start.go:563] Will wait 60s for crictl version
	I1014 19:40:32.859485  437269 ssh_runner.go:195] Run: which crictl
	I1014 19:40:32.863222  437269 command_runner.go:130] > /usr/local/bin/crictl
	I1014 19:40:32.863312  437269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:40:32.889462  437269 command_runner.go:130] > Version:  0.1.0
	I1014 19:40:32.889482  437269 command_runner.go:130] > RuntimeName:  cri-o
	I1014 19:40:32.889486  437269 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1014 19:40:32.889490  437269 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 19:40:32.889505  437269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:40:32.889559  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.920224  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.920251  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.920258  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.920266  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.920279  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.920285  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.920291  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.920303  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.920312  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.920322  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.920332  437269 command_runner.go:130] >      static
	I1014 19:40:32.920340  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.920347  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.920354  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.920358  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.920361  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.920367  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.920371  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.920379  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.920383  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.920453  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.949467  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.949490  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.949495  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.949499  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.949504  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.949508  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.949514  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.949525  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.949534  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.949540  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.949546  437269 command_runner.go:130] >      static
	I1014 19:40:32.949555  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.949560  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.949567  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.949571  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.949576  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.949582  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.949588  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.949592  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.949599  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.952722  437269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:40:32.953989  437269 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:40:32.971672  437269 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:40:32.976098  437269 command_runner.go:130] > 192.168.49.1	host.minikube.internal
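	The grep above confirms that /etc/hosts already maps host.minikube.internal to the network gateway (192.168.49.1); when the entry is missing, the start path appends it. A hedged grep-or-append sketch of that pattern (ensureHostsEntry is a hypothetical helper, and writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>host" to path unless an identical
// line is already present, mirroring the grep hit in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	needle := ip + "\t" + host
	for _, line := range strings.Split(string(data), "\n") {
		if strings.TrimSpace(line) == needle {
			return nil // already present, nothing to do
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintln(f, needle)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}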
	I1014 19:40:32.976178  437269 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
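	The "updating cluster" line dumps minikube's full ClusterConfig struct. Only a handful of those fields matter for this excerpt; the trimmed-down stand-in below keeps the field names from the dump but is illustrative, not minikube's actual type:

package main

import "fmt"

// KubernetesConfig and Node are deliberately trimmed stand-ins for the
// struct dumped above; the field names match the dump, the types do not
// claim to be minikube's.
type KubernetesConfig struct {
	KubernetesVersion      string
	ClusterName            string
	ContainerRuntime       string
	NetworkPlugin          string
	ServiceCIDR            string
	ShouldLoadCachedImages bool
}

type Node struct {
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

func main() {
	cfg := KubernetesConfig{
		KubernetesVersion:      "v1.34.1",
		ClusterName:            "functional-744288",
		ContainerRuntime:       "crio",
		NetworkPlugin:          "cni",
		ServiceCIDR:            "10.96.0.0/12",
		ShouldLoadCachedImages: true,
	}
	node := Node{IP: "192.168.49.2", Port: 8441, KubernetesVersion: cfg.KubernetesVersion,
		ContainerRuntime: cfg.ContainerRuntime, ControlPlane: true, Worker: true}
	// %+v reproduces the key:value style seen in the log line.
	fmt.Printf("%+v\n%+v\n", cfg, node)
}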
	I1014 19:40:32.976267  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:32.976332  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.006155  437269 command_runner.go:130] > {
	I1014 19:40:33.006181  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.006186  437269 command_runner.go:130] >     {
	I1014 19:40:33.006194  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.006200  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006209  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.006213  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006218  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006232  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.006248  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.006257  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006270  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.006276  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006281  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006287  437269 command_runner.go:130] >     },
	I1014 19:40:33.006290  437269 command_runner.go:130] >     {
	I1014 19:40:33.006304  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.006316  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006324  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.006330  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006335  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006348  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.006364  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.006372  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006379  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.006388  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006398  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006402  437269 command_runner.go:130] >     },
	I1014 19:40:33.006405  437269 command_runner.go:130] >     {
	I1014 19:40:33.006413  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.006422  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006431  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.006441  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006448  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006463  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.006477  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.006486  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006496  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.006505  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.006513  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006516  437269 command_runner.go:130] >     },
	I1014 19:40:33.006525  437269 command_runner.go:130] >     {
	I1014 19:40:33.006535  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.006545  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006555  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.006563  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006570  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006584  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.006598  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.006607  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006615  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.006619  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006624  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006632  437269 command_runner.go:130] >       },
	I1014 19:40:33.006646  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006657  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006667  437269 command_runner.go:130] >     },
	I1014 19:40:33.006675  437269 command_runner.go:130] >     {
	I1014 19:40:33.006689  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.006695  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006707  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.006714  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006718  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006732  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.006748  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.006767  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006778  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.006786  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006795  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006803  437269 command_runner.go:130] >       },
	I1014 19:40:33.006809  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006819  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006827  437269 command_runner.go:130] >     },
	I1014 19:40:33.006835  437269 command_runner.go:130] >     {
	I1014 19:40:33.006846  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.006855  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006865  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.006874  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006884  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006899  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.006910  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.006918  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006926  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.006935  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006948  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006957  437269 command_runner.go:130] >       },
	I1014 19:40:33.006967  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006976  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006985  437269 command_runner.go:130] >     },
	I1014 19:40:33.006993  437269 command_runner.go:130] >     {
	I1014 19:40:33.007004  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.007011  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007019  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.007027  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007037  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007052  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.007067  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.007076  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007084  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.007092  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007095  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007103  437269 command_runner.go:130] >     },
	I1014 19:40:33.007109  437269 command_runner.go:130] >     {
	I1014 19:40:33.007123  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.007132  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007142  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.007152  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007162  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007175  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.007194  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.007203  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007213  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.007220  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007229  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.007237  437269 command_runner.go:130] >       },
	I1014 19:40:33.007246  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007253  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007260  437269 command_runner.go:130] >     },
	I1014 19:40:33.007266  437269 command_runner.go:130] >     {
	I1014 19:40:33.007278  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.007285  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007290  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.007298  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007308  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007320  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.007334  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.007342  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007351  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.007359  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007370  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.007376  437269 command_runner.go:130] >       },
	I1014 19:40:33.007380  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007387  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.007393  437269 command_runner.go:130] >     }
	I1014 19:40:33.007401  437269 command_runner.go:130] >   ]
	I1014 19:40:33.007406  437269 command_runner.go:130] > }
	I1014 19:40:33.007590  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.007603  437269 crio.go:433] Images already preloaded, skipping extraction
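	The preload check works by unmarshalling the crictl images --output json blob above and confirming that every image the v1.34.1/CRI-O combination needs is already tagged on the node; only then is the preload tarball extraction skipped. A sketch of that decision, with requiredImages and allPreloaded as hypothetical names (the required list here is read off the log output, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of `crictl images --output json` shown in
// the log: a top-level "images" array with ids and repo tags.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

var requiredImages = []string{
	"registry.k8s.io/kube-apiserver:v1.34.1",
	"registry.k8s.io/kube-controller-manager:v1.34.1",
	"registry.k8s.io/kube-scheduler:v1.34.1",
	"registry.k8s.io/kube-proxy:v1.34.1",
	"registry.k8s.io/etcd:3.6.4-0",
	"registry.k8s.io/coredns/coredns:v1.12.1",
	"registry.k8s.io/pause:3.10.1",
}

// allPreloaded reports whether every required repo tag is already known
// to the runtime, i.e. whether extraction can be skipped.
func allPreloaded() (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range requiredImages {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allPreloaded()
	fmt.Println(ok, err) // true <nil> on the node above
}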
	I1014 19:40:33.007661  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.032442  437269 command_runner.go:130] > {
	I1014 19:40:33.032462  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.032466  437269 command_runner.go:130] >     {
	I1014 19:40:33.032478  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.032485  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032495  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.032501  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032508  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032519  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.032527  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.032534  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032538  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.032542  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032548  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032551  437269 command_runner.go:130] >     },
	I1014 19:40:33.032555  437269 command_runner.go:130] >     {
	I1014 19:40:33.032561  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.032567  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032572  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.032575  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032582  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032591  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.032602  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.032608  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032612  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.032616  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032621  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032626  437269 command_runner.go:130] >     },
	I1014 19:40:33.032629  437269 command_runner.go:130] >     {
	I1014 19:40:33.032635  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.032642  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032647  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.032652  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032656  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032665  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.032675  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.032682  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032686  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.032690  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.032694  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032697  437269 command_runner.go:130] >     },
	I1014 19:40:33.032700  437269 command_runner.go:130] >     {
	I1014 19:40:33.032705  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.032709  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032714  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.032720  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032724  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032730  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.032739  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.032743  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032749  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.032772  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032781  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032786  437269 command_runner.go:130] >       },
	I1014 19:40:33.032793  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032798  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032801  437269 command_runner.go:130] >     },
	I1014 19:40:33.032804  437269 command_runner.go:130] >     {
	I1014 19:40:33.032810  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.032816  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032821  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.032827  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032830  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032837  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.032847  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.032850  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032858  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.032862  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032866  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032869  437269 command_runner.go:130] >       },
	I1014 19:40:33.032873  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032877  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032880  437269 command_runner.go:130] >     },
	I1014 19:40:33.032883  437269 command_runner.go:130] >     {
	I1014 19:40:33.032889  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.032895  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032901  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.032906  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032910  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032917  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.032935  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.032940  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032944  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.032948  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032955  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032958  437269 command_runner.go:130] >       },
	I1014 19:40:33.032963  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032969  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032973  437269 command_runner.go:130] >     },
	I1014 19:40:33.032976  437269 command_runner.go:130] >     {
	I1014 19:40:33.032981  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.032986  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032990  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.032996  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033000  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033009  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.033018  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.033023  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033027  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.033033  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033037  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033042  437269 command_runner.go:130] >     },
	I1014 19:40:33.033045  437269 command_runner.go:130] >     {
	I1014 19:40:33.033051  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.033055  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033059  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.033062  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033066  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033073  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.033115  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.033125  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033129  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.033133  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033139  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.033142  437269 command_runner.go:130] >       },
	I1014 19:40:33.033146  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033150  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033153  437269 command_runner.go:130] >     },
	I1014 19:40:33.033157  437269 command_runner.go:130] >     {
	I1014 19:40:33.033166  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.033170  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033175  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.033180  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033184  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033194  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.033201  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.033207  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033210  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.033214  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033217  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.033221  437269 command_runner.go:130] >       },
	I1014 19:40:33.033227  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033231  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.033234  437269 command_runner.go:130] >     }
	I1014 19:40:33.033237  437269 command_runner.go:130] >   ]
	I1014 19:40:33.033243  437269 command_runner.go:130] > }
	I1014 19:40:33.033339  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.033350  437269 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:40:33.033357  437269 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:40:33.033466  437269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
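	The kubelet [Unit]/[Service]/[Install] text above is a systemd override rendered from the node's Kubernetes version, hostname, and IP (the empty ExecStart= line clears the packaged command before the new one is set). A minimal text/template sketch that produces the same shape; the template string is illustrative, not minikube's exact source:

package main

import (
	"os"
	"text/template"
)

// unit is an illustrative drop-in template matching the shape logged
// above; the empty ExecStart= clears any packaged command first.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.34.1",
		"Hostname": "functional-744288",
		"NodeIP":   "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}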
	I1014 19:40:33.033525  437269 ssh_runner.go:195] Run: crio config
	I1014 19:40:33.060289  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059904069Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1014 19:40:33.060322  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059934761Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1014 19:40:33.060333  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.05995717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1014 19:40:33.060344  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059977069Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1014 19:40:33.060356  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060036887Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:33.060415  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060204237Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1014 19:40:33.072518  437269 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1014 19:40:33.078451  437269 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 19:40:33.078471  437269 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 19:40:33.078478  437269 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 19:40:33.078485  437269 command_runner.go:130] > #
	I1014 19:40:33.078491  437269 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 19:40:33.078497  437269 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 19:40:33.078504  437269 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 19:40:33.078513  437269 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 19:40:33.078518  437269 command_runner.go:130] > # reload'.
	I1014 19:40:33.078524  437269 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 19:40:33.078533  437269 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 19:40:33.078539  437269 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 19:40:33.078545  437269 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 19:40:33.078551  437269 command_runner.go:130] > [crio]
	I1014 19:40:33.078557  437269 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 19:40:33.078564  437269 command_runner.go:130] > # containers images, in this directory.
	I1014 19:40:33.078572  437269 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1014 19:40:33.078580  437269 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 19:40:33.078585  437269 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1014 19:40:33.078594  437269 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1014 19:40:33.078601  437269 command_runner.go:130] > # imagestore = ""
	I1014 19:40:33.078607  437269 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 19:40:33.078615  437269 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 19:40:33.078620  437269 command_runner.go:130] > # storage_driver = "overlay"
	I1014 19:40:33.078625  437269 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 19:40:33.078633  437269 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 19:40:33.078637  437269 command_runner.go:130] > # storage_option = [
	I1014 19:40:33.078642  437269 command_runner.go:130] > # ]
	I1014 19:40:33.078648  437269 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 19:40:33.078656  437269 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 19:40:33.078660  437269 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 19:40:33.078667  437269 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 19:40:33.078673  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 19:40:33.078690  437269 command_runner.go:130] > # always happen on a node reboot
	I1014 19:40:33.078695  437269 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 19:40:33.078703  437269 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 19:40:33.078709  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 19:40:33.078716  437269 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 19:40:33.078720  437269 command_runner.go:130] > # version_file_persist = ""
	I1014 19:40:33.078729  437269 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 19:40:33.078739  437269 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 19:40:33.078745  437269 command_runner.go:130] > # internal_wipe = true
	I1014 19:40:33.078771  437269 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 19:40:33.078784  437269 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 19:40:33.078790  437269 command_runner.go:130] > # internal_repair = true
	I1014 19:40:33.078798  437269 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 19:40:33.078804  437269 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 19:40:33.078816  437269 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 19:40:33.078823  437269 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 19:40:33.078829  437269 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 19:40:33.078834  437269 command_runner.go:130] > [crio.api]
	I1014 19:40:33.078839  437269 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 19:40:33.078846  437269 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 19:40:33.078851  437269 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 19:40:33.078858  437269 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 19:40:33.078864  437269 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 19:40:33.078871  437269 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 19:40:33.078875  437269 command_runner.go:130] > # stream_port = "0"
	I1014 19:40:33.078881  437269 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 19:40:33.078885  437269 command_runner.go:130] > # stream_enable_tls = false
	I1014 19:40:33.078893  437269 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 19:40:33.078897  437269 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 19:40:33.078904  437269 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 19:40:33.078912  437269 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078916  437269 command_runner.go:130] > # stream_tls_cert = ""
	I1014 19:40:33.078924  437269 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 19:40:33.078931  437269 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078936  437269 command_runner.go:130] > # stream_tls_key = ""
	I1014 19:40:33.078941  437269 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 19:40:33.078949  437269 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 19:40:33.078954  437269 command_runner.go:130] > # automatically pick up the changes.
	I1014 19:40:33.078960  437269 command_runner.go:130] > # stream_tls_ca = ""
	I1014 19:40:33.078977  437269 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078984  437269 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1014 19:40:33.078991  437269 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078998  437269 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1014 19:40:33.079004  437269 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 19:40:33.079011  437269 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 19:40:33.079015  437269 command_runner.go:130] > [crio.runtime]
	I1014 19:40:33.079021  437269 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 19:40:33.079028  437269 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 19:40:33.079032  437269 command_runner.go:130] > # "nofile=1024:2048"
	I1014 19:40:33.079040  437269 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 19:40:33.079046  437269 command_runner.go:130] > # default_ulimits = [
	I1014 19:40:33.079049  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079054  437269 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 19:40:33.079060  437269 command_runner.go:130] > # no_pivot = false
	I1014 19:40:33.079065  437269 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 19:40:33.079073  437269 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 19:40:33.079078  437269 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 19:40:33.079086  437269 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 19:40:33.079090  437269 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 19:40:33.079099  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079105  437269 command_runner.go:130] > # conmon = ""
	I1014 19:40:33.079109  437269 command_runner.go:130] > # Cgroup setting for conmon
	I1014 19:40:33.079117  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 19:40:33.079123  437269 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 19:40:33.079129  437269 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 19:40:33.079136  437269 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 19:40:33.079142  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079147  437269 command_runner.go:130] > # conmon_env = [
	I1014 19:40:33.079150  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079155  437269 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 19:40:33.079163  437269 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 19:40:33.079169  437269 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 19:40:33.079175  437269 command_runner.go:130] > # default_env = [
	I1014 19:40:33.079177  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079183  437269 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 19:40:33.079192  437269 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1014 19:40:33.079198  437269 command_runner.go:130] > # selinux = false
	I1014 19:40:33.079204  437269 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 19:40:33.079210  437269 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1014 19:40:33.079219  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079225  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.079231  437269 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1014 19:40:33.079237  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079242  437269 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1014 19:40:33.079250  437269 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 19:40:33.079258  437269 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 19:40:33.079264  437269 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 19:40:33.079273  437269 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 19:40:33.079279  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079284  437269 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 19:40:33.079291  437269 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 19:40:33.079295  437269 command_runner.go:130] > # the cgroup blockio controller.
	I1014 19:40:33.079301  437269 command_runner.go:130] > # blockio_config_file = ""
	I1014 19:40:33.079308  437269 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 19:40:33.079314  437269 command_runner.go:130] > # blockio parameters.
	I1014 19:40:33.079317  437269 command_runner.go:130] > # blockio_reload = false
	I1014 19:40:33.079325  437269 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 19:40:33.079329  437269 command_runner.go:130] > # irqbalance daemon.
	I1014 19:40:33.079336  437269 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 19:40:33.079342  437269 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 19:40:33.079351  437269 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 19:40:33.079360  437269 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 19:40:33.079367  437269 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 19:40:33.079374  437269 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 19:40:33.079380  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079385  437269 command_runner.go:130] > # rdt_config_file = ""
	I1014 19:40:33.079393  437269 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 19:40:33.079396  437269 command_runner.go:130] > # cgroup_manager = "systemd"
	I1014 19:40:33.079402  437269 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 19:40:33.079407  437269 command_runner.go:130] > # separate_pull_cgroup = ""
	I1014 19:40:33.079413  437269 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 19:40:33.079421  437269 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 19:40:33.079427  437269 command_runner.go:130] > # will be added.
	I1014 19:40:33.079430  437269 command_runner.go:130] > # default_capabilities = [
	I1014 19:40:33.079433  437269 command_runner.go:130] > # 	"CHOWN",
	I1014 19:40:33.079439  437269 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 19:40:33.079442  437269 command_runner.go:130] > # 	"FSETID",
	I1014 19:40:33.079445  437269 command_runner.go:130] > # 	"FOWNER",
	I1014 19:40:33.079451  437269 command_runner.go:130] > # 	"SETGID",
	I1014 19:40:33.079466  437269 command_runner.go:130] > # 	"SETUID",
	I1014 19:40:33.079472  437269 command_runner.go:130] > # 	"SETPCAP",
	I1014 19:40:33.079475  437269 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 19:40:33.079480  437269 command_runner.go:130] > # 	"KILL",
	I1014 19:40:33.079484  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079493  437269 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 19:40:33.079501  437269 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 19:40:33.079508  437269 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 19:40:33.079514  437269 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 19:40:33.079522  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079526  437269 command_runner.go:130] > default_sysctls = [
	I1014 19:40:33.079530  437269 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 19:40:33.079536  437269 command_runner.go:130] > ]
	I1014 19:40:33.079540  437269 command_runner.go:130] > # List of devices on the host that a
	I1014 19:40:33.079548  437269 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 19:40:33.079553  437269 command_runner.go:130] > # allowed_devices = [
	I1014 19:40:33.079557  437269 command_runner.go:130] > # 	"/dev/fuse",
	I1014 19:40:33.079563  437269 command_runner.go:130] > # 	"/dev/net/tun",
	I1014 19:40:33.079566  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079574  437269 command_runner.go:130] > # List of additional devices. specified as
	I1014 19:40:33.079581  437269 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 19:40:33.079588  437269 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 19:40:33.079595  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079601  437269 command_runner.go:130] > # additional_devices = [
	I1014 19:40:33.079604  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079611  437269 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 19:40:33.079615  437269 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 19:40:33.079619  437269 command_runner.go:130] > # 	"/etc/cdi",
	I1014 19:40:33.079625  437269 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 19:40:33.079628  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079633  437269 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 19:40:33.079641  437269 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 19:40:33.079645  437269 command_runner.go:130] > # Defaults to false.
	I1014 19:40:33.079652  437269 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 19:40:33.079659  437269 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 19:40:33.079666  437269 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 19:40:33.079670  437269 command_runner.go:130] > # hooks_dir = [
	I1014 19:40:33.079682  437269 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 19:40:33.079687  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079693  437269 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 19:40:33.079701  437269 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 19:40:33.079706  437269 command_runner.go:130] > # its default mounts from the following two files:
	I1014 19:40:33.079712  437269 command_runner.go:130] > #
	I1014 19:40:33.079718  437269 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 19:40:33.079726  437269 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 19:40:33.079734  437269 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 19:40:33.079737  437269 command_runner.go:130] > #
	I1014 19:40:33.079743  437269 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 19:40:33.079751  437269 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 19:40:33.079780  437269 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 19:40:33.079788  437269 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 19:40:33.079791  437269 command_runner.go:130] > #
	I1014 19:40:33.079797  437269 command_runner.go:130] > # default_mounts_file = ""
	I1014 19:40:33.079804  437269 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 19:40:33.079811  437269 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 19:40:33.079816  437269 command_runner.go:130] > # pids_limit = -1
	I1014 19:40:33.079822  437269 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1014 19:40:33.079830  437269 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 19:40:33.079839  437269 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 19:40:33.079846  437269 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 19:40:33.079852  437269 command_runner.go:130] > # log_size_max = -1
	I1014 19:40:33.079858  437269 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 19:40:33.079864  437269 command_runner.go:130] > # log_to_journald = false
	I1014 19:40:33.079870  437269 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 19:40:33.079878  437269 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 19:40:33.079883  437269 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 19:40:33.079890  437269 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 19:40:33.079895  437269 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 19:40:33.079901  437269 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 19:40:33.079906  437269 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 19:40:33.079912  437269 command_runner.go:130] > # read_only = false
	I1014 19:40:33.079917  437269 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 19:40:33.079926  437269 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 19:40:33.079933  437269 command_runner.go:130] > # live configuration reload.
	I1014 19:40:33.079937  437269 command_runner.go:130] > # log_level = "info"
	I1014 19:40:33.079942  437269 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 19:40:33.079950  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079953  437269 command_runner.go:130] > # log_filter = ""
	I1014 19:40:33.079959  437269 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079967  437269 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 19:40:33.079970  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.079978  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.079983  437269 command_runner.go:130] > # uid_mappings = ""
	I1014 19:40:33.079989  437269 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079997  437269 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 19:40:33.080005  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.080014  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080020  437269 command_runner.go:130] > # gid_mappings = ""
	I1014 19:40:33.080026  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 19:40:33.080035  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080043  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080049  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080055  437269 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 19:40:33.080061  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 19:40:33.080069  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080075  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080085  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080090  437269 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 19:40:33.080096  437269 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 19:40:33.080112  437269 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 19:40:33.080120  437269 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 19:40:33.080124  437269 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 19:40:33.080131  437269 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 19:40:33.080138  437269 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 19:40:33.080144  437269 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 19:40:33.080149  437269 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 19:40:33.080155  437269 command_runner.go:130] > # drop_infra_ctr = true
	I1014 19:40:33.080160  437269 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 19:40:33.080168  437269 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 19:40:33.080175  437269 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 19:40:33.080181  437269 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 19:40:33.080188  437269 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 19:40:33.080195  437269 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 19:40:33.080200  437269 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 19:40:33.080207  437269 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 19:40:33.080211  437269 command_runner.go:130] > # shared_cpuset = ""
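As a point of reference, the Linux CPU list format mentioned above accepts comma-separated CPUs and ranges, so a hypothetical pinning could be:

	infra_ctr_cpuset = "0-1"
	shared_cpuset = "6,8-11"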
	I1014 19:40:33.080219  437269 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 19:40:33.080223  437269 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 19:40:33.080230  437269 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 19:40:33.080237  437269 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 19:40:33.080243  437269 command_runner.go:130] > # pinns_path = ""
	I1014 19:40:33.080249  437269 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 19:40:33.080256  437269 command_runner.go:130] > # checkpoint and restore containers or pods (even if CRIU is found in $PATH).
	I1014 19:40:33.080261  437269 command_runner.go:130] > # enable_criu_support = true
	I1014 19:40:33.080268  437269 command_runner.go:130] > # Enable/disable the generation of container and
	I1014 19:40:33.080273  437269 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 19:40:33.080280  437269 command_runner.go:130] > # enable_pod_events = false
	I1014 19:40:33.080285  437269 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 19:40:33.080292  437269 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 19:40:33.080296  437269 command_runner.go:130] > # default_runtime = "crun"
	I1014 19:40:33.080301  437269 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 19:40:33.080310  437269 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1014 19:40:33.080320  437269 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 19:40:33.080325  437269 command_runner.go:130] > # creation as a file is not desired either.
	I1014 19:40:33.080336  437269 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 19:40:33.080342  437269 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 19:40:33.080346  437269 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 19:40:33.080352  437269 command_runner.go:130] > # ]
	I1014 19:40:33.080357  437269 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 19:40:33.080365  437269 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 19:40:33.080373  437269 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 19:40:33.080378  437269 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 19:40:33.080382  437269 command_runner.go:130] > #
	I1014 19:40:33.080387  437269 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 19:40:33.080394  437269 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 19:40:33.080397  437269 command_runner.go:130] > # runtime_type = "oci"
	I1014 19:40:33.080404  437269 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 19:40:33.080408  437269 command_runner.go:130] > # inherit_default_runtime = false
	I1014 19:40:33.080413  437269 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 19:40:33.080419  437269 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 19:40:33.080424  437269 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 19:40:33.080430  437269 command_runner.go:130] > # monitor_env = []
	I1014 19:40:33.080435  437269 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 19:40:33.080440  437269 command_runner.go:130] > # allowed_annotations = []
	I1014 19:40:33.080445  437269 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 19:40:33.080451  437269 command_runner.go:130] > # no_sync_log = false
	I1014 19:40:33.080455  437269 command_runner.go:130] > # default_annotations = {}
	I1014 19:40:33.080461  437269 command_runner.go:130] > # stream_websockets = false
	I1014 19:40:33.080465  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.080487  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.080494  437269 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 19:40:33.080500  437269 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 19:40:33.080508  437269 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 19:40:33.080514  437269 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 19:40:33.080519  437269 command_runner.go:130] > #   in $PATH.
	I1014 19:40:33.080525  437269 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 19:40:33.080532  437269 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 19:40:33.080538  437269 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 19:40:33.080543  437269 command_runner.go:130] > #   state.
	I1014 19:40:33.080552  437269 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 19:40:33.080560  437269 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1014 19:40:33.080565  437269 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1014 19:40:33.080573  437269 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1014 19:40:33.080578  437269 command_runner.go:130] > #   the values from the default runtime on load time.
	I1014 19:40:33.080586  437269 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 19:40:33.080591  437269 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 19:40:33.080599  437269 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 19:40:33.080605  437269 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 19:40:33.080612  437269 command_runner.go:130] > #   The currently recognized values are:
	I1014 19:40:33.080618  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 19:40:33.080627  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 19:40:33.080636  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 19:40:33.080641  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 19:40:33.080651  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 19:40:33.080660  437269 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 19:40:33.080669  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 19:40:33.080680  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 19:40:33.080687  437269 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 19:40:33.080693  437269 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1014 19:40:33.080702  437269 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1014 19:40:33.080710  437269 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1014 19:40:33.080715  437269 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1014 19:40:33.080724  437269 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1014 19:40:33.080732  437269 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1014 19:40:33.080738  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1014 19:40:33.080747  437269 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 19:40:33.080751  437269 command_runner.go:130] > #   deprecated option "conmon".
	I1014 19:40:33.080773  437269 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 19:40:33.080783  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 19:40:33.080796  437269 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 19:40:33.080803  437269 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 19:40:33.080810  437269 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1014 19:40:33.080817  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 19:40:33.080824  437269 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1014 19:40:33.080830  437269 command_runner.go:130] > #   conmon-rs by using:
	I1014 19:40:33.080837  437269 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1014 19:40:33.080847  437269 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1014 19:40:33.080857  437269 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1014 19:40:33.080865  437269 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 19:40:33.080872  437269 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 19:40:33.080879  437269 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1014 19:40:33.080888  437269 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1014 19:40:33.080894  437269 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1014 19:40:33.080904  437269 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1014 19:40:33.080915  437269 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1014 19:40:33.080921  437269 command_runner.go:130] > #   when a machine crash happens.
	I1014 19:40:33.080929  437269 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1014 19:40:33.080939  437269 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1014 19:40:33.080949  437269 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1014 19:40:33.080955  437269 command_runner.go:130] > #   seccomp profile for the runtime.
	I1014 19:40:33.080961  437269 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1014 19:40:33.080970  437269 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1014 19:40:33.080975  437269 command_runner.go:130] > #
	I1014 19:40:33.080980  437269 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 19:40:33.080985  437269 command_runner.go:130] > #
	I1014 19:40:33.080991  437269 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 19:40:33.080998  437269 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1014 19:40:33.081002  437269 command_runner.go:130] > #
	I1014 19:40:33.081007  437269 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 19:40:33.081015  437269 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 19:40:33.081020  437269 command_runner.go:130] > #
	I1014 19:40:33.081026  437269 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 19:40:33.081032  437269 command_runner.go:130] > # feature.
	I1014 19:40:33.081035  437269 command_runner.go:130] > #
	I1014 19:40:33.081042  437269 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1014 19:40:33.081048  437269 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 19:40:33.081057  437269 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 19:40:33.081062  437269 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 19:40:33.081070  437269 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1014 19:40:33.081073  437269 command_runner.go:130] > #
	I1014 19:40:33.081079  437269 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 19:40:33.081087  437269 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 19:40:33.081090  437269 command_runner.go:130] > #
	I1014 19:40:33.081096  437269 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1014 19:40:33.081103  437269 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 19:40:33.081106  437269 command_runner.go:130] > #
	I1014 19:40:33.081112  437269 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 19:40:33.081119  437269 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 19:40:33.081122  437269 command_runner.go:130] > # limitation.
	I1014 19:40:33.081129  437269 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1014 19:40:33.081138  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1014 19:40:33.081143  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081147  437269 command_runner.go:130] > runtime_root = "/run/crun"
	I1014 19:40:33.081151  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081157  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081161  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081167  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081171  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081177  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081181  437269 command_runner.go:130] > allowed_annotations = [
	I1014 19:40:33.081187  437269 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1014 19:40:33.081190  437269 command_runner.go:130] > ]
	I1014 19:40:33.081197  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081201  437269 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 19:40:33.081208  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1014 19:40:33.081212  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081218  437269 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 19:40:33.081222  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081229  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081234  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081241  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081245  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081251  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081256  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081264  437269 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 19:40:33.081271  437269 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 19:40:33.081277  437269 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 19:40:33.081286  437269 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1014 19:40:33.081298  437269 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1014 19:40:33.081309  437269 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1014 19:40:33.081318  437269 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1014 19:40:33.081324  437269 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 19:40:33.081335  437269 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 19:40:33.081345  437269 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 19:40:33.081353  437269 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 19:40:33.081359  437269 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 19:40:33.081365  437269 command_runner.go:130] > # Example:
	I1014 19:40:33.081369  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 19:40:33.081375  437269 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 19:40:33.081380  437269 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 19:40:33.081389  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 19:40:33.081395  437269 command_runner.go:130] > # cpuset = "0-1"
	I1014 19:40:33.081399  437269 command_runner.go:130] > # cpushares = "5"
	I1014 19:40:33.081405  437269 command_runner.go:130] > # cpuquota = "1000"
	I1014 19:40:33.081408  437269 command_runner.go:130] > # cpuperiod = "100000"
	I1014 19:40:33.081412  437269 command_runner.go:130] > # cpulimit = "35"
	I1014 19:40:33.081417  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.081421  437269 command_runner.go:130] > # The workload name is workload-type.
	I1014 19:40:33.081430  437269 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 19:40:33.081438  437269 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 19:40:33.081443  437269 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 19:40:33.081453  437269 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 19:40:33.081470  437269 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
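Concretely, a pod opting into the example workload above might carry annotations of this shape, following the $annotation_prefix.$resource/$ctrName form (hypothetical values mirroring the example prefix):

	metadata:
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type.cpushares/my-container: "10"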
	I1014 19:40:33.081477  437269 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 19:40:33.081484  437269 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 19:40:33.081490  437269 command_runner.go:130] > # Default value is set to true
	I1014 19:40:33.081494  437269 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 19:40:33.081499  437269 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1014 19:40:33.081505  437269 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1014 19:40:33.081510  437269 command_runner.go:130] > # Default value is set to 'false'
	I1014 19:40:33.081516  437269 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 19:40:33.081522  437269 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1014 19:40:33.081531  437269 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1014 19:40:33.081537  437269 command_runner.go:130] > # timezone = ""
	I1014 19:40:33.081543  437269 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 19:40:33.081549  437269 command_runner.go:130] > #
	I1014 19:40:33.081555  437269 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 19:40:33.081563  437269 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1014 19:40:33.081567  437269 command_runner.go:130] > [crio.image]
	I1014 19:40:33.081575  437269 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 19:40:33.081579  437269 command_runner.go:130] > # default_transport = "docker://"
	I1014 19:40:33.081585  437269 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 19:40:33.081593  437269 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081597  437269 command_runner.go:130] > # global_auth_file = ""
	I1014 19:40:33.081604  437269 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 19:40:33.081609  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081616  437269 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.081622  437269 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 19:40:33.081630  437269 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081634  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081639  437269 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 19:40:33.081645  437269 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 19:40:33.081653  437269 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1014 19:40:33.081658  437269 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1014 19:40:33.081666  437269 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 19:40:33.081671  437269 command_runner.go:130] > # pause_command = "/pause"
	I1014 19:40:33.081682  437269 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 19:40:33.081690  437269 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 19:40:33.081695  437269 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 19:40:33.081703  437269 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 19:40:33.081709  437269 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 19:40:33.081717  437269 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 19:40:33.081723  437269 command_runner.go:130] > # pinned_images = [
	I1014 19:40:33.081725  437269 command_runner.go:130] > # ]
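To illustrate the three pattern styles just described, a hypothetical pinned list could combine them (entries are illustrative, not from this run):

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
		"quay.io/myorg/*",               # glob: wildcard at the end
		"*critical*",                    # keyword: wildcards on both ends
	]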
	I1014 19:40:33.081731  437269 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 19:40:33.081739  437269 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 19:40:33.081745  437269 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 19:40:33.081762  437269 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 19:40:33.081774  437269 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 19:40:33.081781  437269 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1014 19:40:33.081789  437269 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 19:40:33.081795  437269 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 19:40:33.081804  437269 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 19:40:33.081813  437269 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1014 19:40:33.081822  437269 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 19:40:33.081833  437269 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
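For reference, the system-wide policy file mentioned above is a small JSON document; the permissive shape commonly shipped as a default looks like this (shown only as an illustration, not this run's /etc/crio/policy.json):

	{
	  "default": [{"type": "insecureAcceptAnything"}]
	}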
	I1014 19:40:33.081841  437269 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 19:40:33.081847  437269 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 19:40:33.081853  437269 command_runner.go:130] > # changing them here.
	I1014 19:40:33.081859  437269 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1014 19:40:33.081865  437269 command_runner.go:130] > # insecure_registries = [
	I1014 19:40:33.081868  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081877  437269 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 19:40:33.081887  437269 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1014 19:40:33.081893  437269 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 19:40:33.081898  437269 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 19:40:33.081904  437269 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 19:40:33.081910  437269 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1014 19:40:33.081918  437269 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1014 19:40:33.081925  437269 command_runner.go:130] > # auto_reload_registries = false
	I1014 19:40:33.081932  437269 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1014 19:40:33.081940  437269 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1014 19:40:33.081947  437269 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1014 19:40:33.081951  437269 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1014 19:40:33.081958  437269 command_runner.go:130] > # The mode of short name resolution.
	I1014 19:40:33.081966  437269 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1014 19:40:33.081977  437269 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1014 19:40:33.081984  437269 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1014 19:40:33.081989  437269 command_runner.go:130] > # short_name_mode = "enforcing"
	I1014 19:40:33.081997  437269 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1014 19:40:33.082002  437269 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1014 19:40:33.082009  437269 command_runner.go:130] > # oci_artifact_mount_support = true
	I1014 19:40:33.082015  437269 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 19:40:33.082021  437269 command_runner.go:130] > # CNI plugins.
	I1014 19:40:33.082025  437269 command_runner.go:130] > [crio.network]
	I1014 19:40:33.082033  437269 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 19:40:33.082040  437269 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1014 19:40:33.082044  437269 command_runner.go:130] > # cni_default_network = ""
	I1014 19:40:33.082052  437269 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 19:40:33.082056  437269 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 19:40:33.082064  437269 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 19:40:33.082068  437269 command_runner.go:130] > # plugin_dirs = [
	I1014 19:40:33.082071  437269 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 19:40:33.082074  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082078  437269 command_runner.go:130] > # List of included pod metrics.
	I1014 19:40:33.082082  437269 command_runner.go:130] > # included_pod_metrics = [
	I1014 19:40:33.082085  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082089  437269 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1014 19:40:33.082092  437269 command_runner.go:130] > [crio.metrics]
	I1014 19:40:33.082097  437269 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 19:40:33.082100  437269 command_runner.go:130] > # enable_metrics = false
	I1014 19:40:33.082104  437269 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 19:40:33.082108  437269 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 19:40:33.082114  437269 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1014 19:40:33.082119  437269 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 19:40:33.082124  437269 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 19:40:33.082128  437269 command_runner.go:130] > # metrics_collectors = [
	I1014 19:40:33.082131  437269 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 19:40:33.082135  437269 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 19:40:33.082139  437269 command_runner.go:130] > # 	"containers_oom_total",
	I1014 19:40:33.082142  437269 command_runner.go:130] > # 	"processes_defunct",
	I1014 19:40:33.082146  437269 command_runner.go:130] > # 	"operations_total",
	I1014 19:40:33.082150  437269 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 19:40:33.082154  437269 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 19:40:33.082157  437269 command_runner.go:130] > # 	"operations_errors_total",
	I1014 19:40:33.082162  437269 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 19:40:33.082169  437269 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 19:40:33.082173  437269 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 19:40:33.082178  437269 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 19:40:33.082182  437269 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 19:40:33.082188  437269 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 19:40:33.082193  437269 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 19:40:33.082199  437269 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 19:40:33.082203  437269 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1014 19:40:33.082208  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082214  437269 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1014 19:40:33.082219  437269 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1014 19:40:33.082224  437269 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 19:40:33.082227  437269 command_runner.go:130] > # metrics_port = 9090
	I1014 19:40:33.082234  437269 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 19:40:33.082238  437269 command_runner.go:130] > # metrics_socket = ""
	I1014 19:40:33.082245  437269 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 19:40:33.082250  437269 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 19:40:33.082258  437269 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 19:40:33.082263  437269 command_runner.go:130] > # certificate on any modification event.
	I1014 19:40:33.082269  437269 command_runner.go:130] > # metrics_cert = ""
	I1014 19:40:33.082274  437269 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 19:40:33.082280  437269 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 19:40:33.082284  437269 command_runner.go:130] > # metrics_key = ""
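With enable_metrics turned on, the endpoint behaves like any Prometheus target; assuming the default host and port above, a quick manual check might be:

	curl -s http://127.0.0.1:9090/metrics | grep crio_operations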
	I1014 19:40:33.082292  437269 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 19:40:33.082295  437269 command_runner.go:130] > [crio.tracing]
	I1014 19:40:33.082300  437269 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 19:40:33.082306  437269 command_runner.go:130] > # enable_tracing = false
	I1014 19:40:33.082311  437269 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1014 19:40:33.082317  437269 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1014 19:40:33.082324  437269 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 19:40:33.082330  437269 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1014 19:40:33.082334  437269 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 19:40:33.082340  437269 command_runner.go:130] > [crio.nri]
	I1014 19:40:33.082345  437269 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 19:40:33.082350  437269 command_runner.go:130] > # enable_nri = true
	I1014 19:40:33.082354  437269 command_runner.go:130] > # NRI socket to listen on.
	I1014 19:40:33.082361  437269 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 19:40:33.082365  437269 command_runner.go:130] > # NRI plugin directory to use.
	I1014 19:40:33.082372  437269 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 19:40:33.082376  437269 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 19:40:33.082383  437269 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 19:40:33.082388  437269 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 19:40:33.082423  437269 command_runner.go:130] > # nri_disable_connections = false
	I1014 19:40:33.082431  437269 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 19:40:33.082435  437269 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 19:40:33.082440  437269 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 19:40:33.082444  437269 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 19:40:33.082451  437269 command_runner.go:130] > # NRI default validator configuration.
	I1014 19:40:33.082457  437269 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1014 19:40:33.082466  437269 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1014 19:40:33.082472  437269 command_runner.go:130] > # can be restricted/rejected:
	I1014 19:40:33.082476  437269 command_runner.go:130] > # - OCI hook injection
	I1014 19:40:33.082483  437269 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1014 19:40:33.082487  437269 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1014 19:40:33.082494  437269 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1014 19:40:33.082498  437269 command_runner.go:130] > # - adjustment of linux namespaces
	I1014 19:40:33.082506  437269 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1014 19:40:33.082514  437269 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1014 19:40:33.082519  437269 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1014 19:40:33.082524  437269 command_runner.go:130] > #
	I1014 19:40:33.082528  437269 command_runner.go:130] > # [crio.nri.default_validator]
	I1014 19:40:33.082535  437269 command_runner.go:130] > # nri_enable_default_validator = false
	I1014 19:40:33.082539  437269 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1014 19:40:33.082546  437269 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1014 19:40:33.082551  437269 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1014 19:40:33.082559  437269 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1014 19:40:33.082564  437269 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1014 19:40:33.082570  437269 command_runner.go:130] > # nri_validator_required_plugins = [
	I1014 19:40:33.082573  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082582  437269 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1014 19:40:33.082587  437269 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 19:40:33.082593  437269 command_runner.go:130] > [crio.stats]
	I1014 19:40:33.082598  437269 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 19:40:33.082608  437269 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 19:40:33.082614  437269 command_runner.go:130] > # stats_collection_period = 0
	I1014 19:40:33.082619  437269 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1014 19:40:33.082628  437269 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1014 19:40:33.082631  437269 command_runner.go:130] > # collection_period = 0
	I1014 19:40:33.082741  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:33.082769  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:33.082789  437269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:40:33.082811  437269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:40:33.082940  437269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:40:33.083002  437269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:40:33.091321  437269 command_runner.go:130] > kubeadm
	I1014 19:40:33.091339  437269 command_runner.go:130] > kubectl
	I1014 19:40:33.091351  437269 command_runner.go:130] > kubelet
	I1014 19:40:33.091376  437269 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:40:33.091429  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:40:33.099086  437269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:40:33.111962  437269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:40:33.125422  437269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1014 19:40:33.138383  437269 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:40:33.142436  437269 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1014 19:40:33.142515  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.229714  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:33.242948  437269 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:40:33.242967  437269 certs.go:195] generating shared ca certs ...
	I1014 19:40:33.242983  437269 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.243111  437269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:40:33.243147  437269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:40:33.243157  437269 certs.go:257] generating profile certs ...
	I1014 19:40:33.243244  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:40:33.243295  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:40:33.243331  437269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:40:33.243342  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 19:40:33.243354  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 19:40:33.243366  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 19:40:33.243378  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 19:40:33.243389  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 19:40:33.243402  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 19:40:33.243414  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 19:40:33.243426  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 19:40:33.243468  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:40:33.243499  437269 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:40:33.243509  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:40:33.243528  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:40:33.243550  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:40:33.243570  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:40:33.243605  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:33.243631  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.243646  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.243657  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.244241  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:40:33.262628  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:40:33.280949  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:40:33.299645  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:40:33.318581  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:40:33.336772  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:40:33.354893  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:40:33.372224  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:40:33.389816  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:40:33.407785  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:40:33.425006  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:40:33.442414  437269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:40:33.455418  437269 ssh_runner.go:195] Run: openssl version
	I1014 19:40:33.461786  437269 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1014 19:40:33.461878  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:40:33.470707  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474930  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474991  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.475040  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.510084  437269 command_runner.go:130] > 51391683
	I1014 19:40:33.510386  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:40:33.519147  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:40:33.528110  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532126  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532195  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532237  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.566452  437269 command_runner.go:130] > 3ec20f2e
	I1014 19:40:33.566529  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 19:40:33.575059  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:40:33.583998  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.587961  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588033  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588081  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.622398  437269 command_runner.go:130] > b5213941
	I1014 19:40:33.622796  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
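
The three hash-and-link sequences above are how OpenSSL's CA lookup gets wired up: `openssl x509 -hash -noout` prints the certificate's subject hash (51391683, 3ec20f2e, b5213941 in this run), and the `<hash>.0` symlink in /etc/ssl/certs is what the verifier resolves at handshake time. A small Go sketch of the same two steps (needs root to write /etc/ssl/certs; the function name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash,
    // then creates the <hash>.0 symlink that OpenSSL's CA lookup expects,
    // mirroring the logged `openssl x509 -hash` + `ln -fs` pair.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. 51391683
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
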
	I1014 19:40:33.631371  437269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635295  437269 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635320  437269 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 19:40:33.635326  437269 command_runner.go:130] > Device: 8,1	Inode: 573968      Links: 1
	I1014 19:40:33.635332  437269 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:33.635341  437269 command_runner.go:130] > Access: 2025-10-14 19:36:24.950222095 +0000
	I1014 19:40:33.635346  437269 command_runner.go:130] > Modify: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635350  437269 command_runner.go:130] > Change: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635355  437269 command_runner.go:130] >  Birth: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635409  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:40:33.669731  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.670080  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:40:33.705048  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.705140  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:40:33.739547  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.739632  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:40:33.774590  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.774998  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:40:33.810800  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.810892  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 19:40:33.846191  437269 command_runner.go:130] > Certificate will not expire
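
Each check above is `openssl x509 -checkend 86400`, i.e. "does this certificate expire within the next 24 hours?". The same test in pure Go with crypto/x509 (path and function name illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM data", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
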
	I1014 19:40:33.846525  437269 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:33.846626  437269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:40:33.846701  437269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:40:33.876026  437269 cri.go:89] found id: ""
	I1014 19:40:33.876095  437269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:40:33.883772  437269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 19:40:33.883800  437269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 19:40:33.883806  437269 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 19:40:33.884383  437269 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:40:33.884404  437269 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:40:33.884457  437269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:40:33.892144  437269 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
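
The `sudo ls` above is the restart-vs-init decision: if the kubelet flag file, the kubelet config, and the etcd data directory are already present, the node is treated as an existing cluster and minikube takes the restart path instead of a fresh `kubeadm init`. An illustrative local equivalent of that presence check (the log runs it remotely over SSH):

    package main

    import (
    	"fmt"
    	"os"
    )

    // hasExistingControlPlane mirrors the logged check: all three artifacts
    // present means a previously initialized control plane.
    func hasExistingControlPlane() bool {
    	for _, p := range []string{
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	} {
    		if _, err := os.Stat(p); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	if hasExistingControlPlane() {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	}
    }
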
	I1014 19:40:33.892232  437269 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.892262  437269 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "functional-744288" cluster setting kubeconfig missing "functional-744288" context setting]
	I1014 19:40:33.892554  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.893171  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.893322  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
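
The repair the log reports is purely a kubeconfig edit: the profile's cluster, user, and context entries are (re)written and the current context is pointed at them. A sketch of that repair with client-go's clientcmd, using the endpoint and paths from this run (the helper name is illustrative):

    package main

    import (
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig rewrites the profile's cluster, user, and context
    // entries and switches the current context to them.
    func repairKubeconfig(path, name, server, ca, cert, key string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cluster := clientcmdapi.NewCluster()
    	cluster.Server = server
    	cluster.CertificateAuthority = ca
    	cfg.Clusters[name] = cluster

    	user := clientcmdapi.NewAuthInfo()
    	user.ClientCertificate = cert
    	user.ClientKey = key
    	cfg.AuthInfos[name] = user

    	ctx := clientcmdapi.NewContext()
    	ctx.Cluster = name
    	ctx.AuthInfo = name
    	cfg.Contexts[name] = ctx
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	base := "/home/jenkins/minikube-integration/21409-413763"
    	if err := repairKubeconfig(
    		base+"/kubeconfig",
    		"functional-744288",
    		"https://192.168.49.2:8441",
    		base+"/.minikube/ca.crt",
    		base+"/.minikube/profiles/functional-744288/client.crt",
    		base+"/.minikube/profiles/functional-744288/client.key",
    	); err != nil {
    		log.Fatal(err)
    	}
    }
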
	I1014 19:40:33.893776  437269 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 19:40:33.893798  437269 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 19:40:33.893803  437269 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 19:40:33.893807  437269 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 19:40:33.893810  437269 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 19:40:33.893821  437269 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 19:40:33.894261  437269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:40:33.902475  437269 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 19:40:33.902513  437269 kubeadm.go:601] duration metric: took 18.102158ms to restartPrimaryControlPlane
	I1014 19:40:33.902527  437269 kubeadm.go:402] duration metric: took 56.015342ms to StartCluster
	I1014 19:40:33.902549  437269 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.902670  437269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.903326  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.903559  437269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:40:33.903636  437269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 19:40:33.903763  437269 addons.go:69] Setting storage-provisioner=true in profile "functional-744288"
	I1014 19:40:33.903782  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:33.903793  437269 addons.go:69] Setting default-storageclass=true in profile "functional-744288"
	I1014 19:40:33.903791  437269 addons.go:238] Setting addon storage-provisioner=true in "functional-744288"
	I1014 19:40:33.903828  437269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-744288"
	I1014 19:40:33.903863  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.904105  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.904258  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.906507  437269 out.go:179] * Verifying Kubernetes components...
	I1014 19:40:33.907562  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.925699  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.925934  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.926358  437269 addons.go:238] Setting addon default-storageclass=true in "functional-744288"
	I1014 19:40:33.926409  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.926937  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.928366  437269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:40:33.930195  437269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:33.930216  437269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:40:33.930272  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.952215  437269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:33.952244  437269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:40:33.952310  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.956857  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:33.971706  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
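
The two sshutil lines record everything needed to reach the node: the docker-forwarded SSH port on localhost, the machine's private key, and the `docker` user. A sketch of establishing that client with golang.org/x/crypto/ssh (minikube's sshutil package has its own wrapper; this is just the underlying dial, with values taken from the log):

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Values from the sshutil lines above.
    	keyPath := "/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa"
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32898", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	log.Println("connected")
    }
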
	I1014 19:40:34.006948  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:34.021044  437269 node_ready.go:35] waiting up to 6m0s for node "functional-744288" to be "Ready" ...
	I1014 19:40:34.021181  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.021246  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.021571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
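
From here the log interleaves two loops: the addon applies and a readiness poll that GETs /api/v1/nodes/functional-744288 roughly every 500ms for up to 6m, tolerating connection-refused responses while the apiserver comes back. A sketch of the same poll with client-go (names and paths are the ones from this run):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21409-413763/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-744288", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // e.g. connection refused while the apiserver restarts: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }
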
	I1014 19:40:34.069169  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.082461  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.132558  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.132646  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.132686  437269 retry.go:31] will retry after 329.296623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.141809  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.144515  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.144547  437269 retry.go:31] will retry after 261.501781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
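
Both applies fail the same way (the apiserver on 8441 is still down), and each failure is handed to a retry helper that sleeps a growing, jittered interval before trying again; the non-round delays in the log (329.296623ms, 261.501781ms, ...) are that jitter. A minimal sketch of the shape of such a helper (not minikube's actual retry.go):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping an
    // exponentially growing interval with up to 100% random jitter between
    // tries, which produces delays like the ones logged above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base << uint(i)                      // exponential growth
    		d += time.Duration(rand.Int63n(int64(d))) // jitter
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = retry(5, 300*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("connection refused")
    		}
    		return nil
    	})
    }
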
	I1014 19:40:34.407171  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.461386  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.461450  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.461492  437269 retry.go:31] will retry after 293.495478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.462464  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.513733  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.516544  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.516582  437269 retry.go:31] will retry after 480.429339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.521783  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.522176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:34.755667  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.810676  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.810724  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.810744  437269 retry.go:31] will retry after 614.479011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.998090  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.022038  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.022373  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.049799  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.052676  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.052709  437269 retry.go:31] will retry after 432.01436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.426352  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:35.482403  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.482455  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.482485  437269 retry.go:31] will retry after 1.057612851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.485602  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.522076  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.522160  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.522499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.537729  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.540612  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.540651  437269 retry.go:31] will retry after 1.151923723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.021224  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.021306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.021677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:36.021751  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:36.521540  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.521648  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:36.541250  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:36.596277  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.596343  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.596366  437269 retry.go:31] will retry after 858.341252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.693590  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:36.746070  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.749114  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.749145  437269 retry.go:31] will retry after 1.225575657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.021547  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.022054  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.455821  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:37.511587  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:37.511647  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.511676  437269 retry.go:31] will retry after 1.002490371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.521830  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.521912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.522269  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.974939  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:38.021626  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:38.022184  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:38.027734  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.030470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.030507  437269 retry.go:31] will retry after 1.025461199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.515193  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:38.521814  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.521914  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.522290  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:38.567735  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.570434  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.570473  437269 retry.go:31] will retry after 1.83061983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.022158  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.022656  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:39.056879  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:39.109896  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:39.112847  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.112884  437269 retry.go:31] will retry after 3.104822489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:40.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.021785  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.022244  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:40.022320  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:40.401833  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:40.453343  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:40.456347  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.456387  437269 retry.go:31] will retry after 3.646877865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.521728  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.522111  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.021801  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.022239  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.521918  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.522016  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.522380  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:42.022132  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.022218  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.022586  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:42.022649  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:42.217895  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:42.273119  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:42.273178  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.273199  437269 retry.go:31] will retry after 5.13792128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.521564  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.522122  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.022026  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.022112  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.521291  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.521385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.521849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.021813  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.021907  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.022272  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.103502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:44.156724  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:44.159470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.159502  437269 retry.go:31] will retry after 6.372961743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.522197  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.522799  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:44.522878  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:45.021683  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.021776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.022120  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:45.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.522209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.021967  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.022441  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.522181  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:47.022210  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.022296  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.022645  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:47.022716  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:47.412207  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:47.466705  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:47.466772  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.466800  437269 retry.go:31] will retry after 6.31356698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.522061  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.522426  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.022131  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.022208  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.022593  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.522267  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.522351  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.522727  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.021317  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.021410  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.021831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.521375  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.521474  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.521884  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:49.521959  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:50.021803  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.021896  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.022319  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.522068  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.522461  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.533648  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:50.590568  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:50.590621  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:50.590649  437269 retry.go:31] will retry after 8.10133009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:51.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.022324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.022671  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:51.521259  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:52.021339  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.021436  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.021838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:52.021911  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:52.521431  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.521523  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.521914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.021515  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.021632  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.022015  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.521689  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.522061  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.781554  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:53.838039  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:53.838101  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:53.838128  437269 retry.go:31] will retry after 9.837531091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:54.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:54.022235  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:54.521778  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.521864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.522222  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:55.022074  437269 type.go:168] "Request Body" body=""
	I1014 19:40:55.022163  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:55.022522  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:55.522140  437269 type.go:168] "Request Body" body=""
	I1014 19:40:55.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:55.522653  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:56.021265  437269 type.go:168] "Request Body" body=""
	I1014 19:40:56.021344  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:56.021726  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:56.521342  437269 type.go:168] "Request Body" body=""
	I1014 19:40:56.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:56.521872  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:56.521945  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:57.021424  437269 type.go:168] "Request Body" body=""
	I1014 19:40:57.021552  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:57.021974  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:57.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:57.521797  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:57.522216  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:58.021903  437269 type.go:168] "Request Body" body=""
	I1014 19:40:58.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:58.022398  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:58.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:58.522169  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:58.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:58.522630  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:58.692921  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:58.746193  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:58.749262  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:58.749295  437269 retry.go:31] will retry after 17.735335575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:59.021769  437269 type.go:168] "Request Body" body=""
	I1014 19:40:59.021862  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:59.022229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:59.521888  437269 type.go:168] "Request Body" body=""
	I1014 19:40:59.522001  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:59.522349  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:00.021702  437269 type.go:168] "Request Body" body=""
	I1014 19:41:00.021801  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:00.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:00.522173  437269 type.go:168] "Request Body" body=""
	I1014 19:41:00.522273  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:00.522632  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:00.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:01.021455  437269 type.go:168] "Request Body" body=""
	I1014 19:41:01.021548  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:01.021937  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:01.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:41:01.521858  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:01.522279  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:02.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:41:02.022289  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:02.022725  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:02.521517  437269 type.go:168] "Request Body" body=""
	I1014 19:41:02.521656  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:02.522050  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:03.021919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:03.022009  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:03.022403  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:03.022475  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:03.522212  437269 type.go:168] "Request Body" body=""
	I1014 19:41:03.522291  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:03.522659  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:03.675962  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:03.727887  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:03.730521  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:03.730562  437269 retry.go:31] will retry after 19.438885547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:04.022253  437269 type.go:168] "Request Body" body=""
	I1014 19:41:04.022379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:04.022809  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:04.521663  437269 type.go:168] "Request Body" body=""
	I1014 19:41:04.521794  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:04.522180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:05.021978  437269 type.go:168] "Request Body" body=""
	I1014 19:41:05.022063  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:05.022412  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:05.522231  437269 type.go:168] "Request Body" body=""
	I1014 19:41:05.522314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:05.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:05.522732  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:06.021349  437269 type.go:168] "Request Body" body=""
	I1014 19:41:06.021429  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:06.021828  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:06.521569  437269 type.go:168] "Request Body" body=""
	I1014 19:41:06.521651  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:06.522040  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:07.021907  437269 type.go:168] "Request Body" body=""
	I1014 19:41:07.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:07.022361  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:07.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:41:07.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:07.522720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:07.522816  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:08.021308  437269 type.go:168] "Request Body" body=""
	I1014 19:41:08.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:08.021750  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:08.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:41:08.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:08.522125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:09.021981  437269 type.go:168] "Request Body" body=""
	I1014 19:41:09.022069  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:09.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:09.521240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:09.521389  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:09.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:10.021609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:10.021695  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:10.022108  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:10.022177  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:10.522050  437269 type.go:168] "Request Body" body=""
	I1014 19:41:10.522140  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:10.522549  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:11.021354  437269 type.go:168] "Request Body" body=""
	I1014 19:41:11.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:11.021862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:11.521641  437269 type.go:168] "Request Body" body=""
	I1014 19:41:11.521740  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:11.522168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:12.022028  437269 type.go:168] "Request Body" body=""
	I1014 19:41:12.022114  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:12.022483  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:12.022549  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:12.521254  437269 type.go:168] "Request Body" body=""
	I1014 19:41:12.521342  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:12.521740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:13.021557  437269 type.go:168] "Request Body" body=""
	I1014 19:41:13.021642  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:13.022039  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:13.521864  437269 type.go:168] "Request Body" body=""
	I1014 19:41:13.521953  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:13.522323  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:14.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:41:14.022287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:14.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:14.022724  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:14.521434  437269 type.go:168] "Request Body" body=""
	I1014 19:41:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:14.521992  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:15.021751  437269 type.go:168] "Request Body" body=""
	I1014 19:41:15.021849  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:15.022211  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:15.522050  437269 type.go:168] "Request Body" body=""
	I1014 19:41:15.522133  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:15.522522  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:16.021287  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.021373  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.021781  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:16.485413  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:16.522201  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.522623  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:16.522694  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:16.537285  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:16.540211  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:16.540239  437269 retry.go:31] will retry after 23.522391633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:17.021909  437269 type.go:168] "Request Body" body=""
	I1014 19:41:17.022015  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:17.022407  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:17.522283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:17.522380  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:17.522743  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:18.021576  437269 type.go:168] "Request Body" body=""
	I1014 19:41:18.021671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:18.022118  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:18.522003  437269 type.go:168] "Request Body" body=""
	I1014 19:41:18.522089  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:18.522516  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:19.021291  437269 type.go:168] "Request Body" body=""
	I1014 19:41:19.021372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:19.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:19.021855  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:19.521591  437269 type.go:168] "Request Body" body=""
	I1014 19:41:19.521674  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:19.522078  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:20.021898  437269 type.go:168] "Request Body" body=""
	I1014 19:41:20.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:20.022480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:20.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:41:20.521403  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:20.521841  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:21.021619  437269 type.go:168] "Request Body" body=""
	I1014 19:41:21.021721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:21.022173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:21.022242  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:21.522084  437269 type.go:168] "Request Body" body=""
	I1014 19:41:21.522176  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:21.522550  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:22.021344  437269 type.go:168] "Request Body" body=""
	I1014 19:41:22.021423  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:22.021877  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:22.521680  437269 type.go:168] "Request Body" body=""
	I1014 19:41:22.521784  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:22.522158  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:23.022009  437269 type.go:168] "Request Body" body=""
	I1014 19:41:23.022088  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:23.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:23.022557  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:23.169796  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:23.227015  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:23.227096  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.227121  437269 retry.go:31] will retry after 24.705053737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.521443  437269 type.go:168] "Request Body" body=""
	I1014 19:41:23.521533  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:23.522057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:24.021980  437269 type.go:168] "Request Body" body=""
	I1014 19:41:24.022087  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:24.022457  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:24.522136  437269 type.go:168] "Request Body" body=""
	I1014 19:41:24.522235  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:24.522578  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:25.021598  437269 type.go:168] "Request Body" body=""
	I1014 19:41:25.021741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:25.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:25.521746  437269 type.go:168] "Request Body" body=""
	I1014 19:41:25.521865  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:25.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:25.522363  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:26.021980  437269 type.go:168] "Request Body" body=""
	I1014 19:41:26.022056  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:26.022462  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:26.522116  437269 type.go:168] "Request Body" body=""
	I1014 19:41:26.522205  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:26.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:27.022289  437269 type.go:168] "Request Body" body=""
	I1014 19:41:27.022379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:27.022735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:27.521368  437269 type.go:168] "Request Body" body=""
	I1014 19:41:27.521454  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:27.521879  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:28.021445  437269 type.go:168] "Request Body" body=""
	I1014 19:41:28.021545  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:28.021931  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:28.021996  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:28.521541  437269 type.go:168] "Request Body" body=""
	I1014 19:41:28.521630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:28.522060  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:29.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:41:29.021774  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:29.022227  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:29.521894  437269 type.go:168] "Request Body" body=""
	I1014 19:41:29.521983  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:29.522351  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:30.022245  437269 type.go:168] "Request Body" body=""
	I1014 19:41:30.022327  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:30.022707  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:30.022824  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:30.521424  437269 type.go:168] "Request Body" body=""
	I1014 19:41:30.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:30.521982  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:31.021342  437269 type.go:168] "Request Body" body=""
	I1014 19:41:31.021429  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:31.021899  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:31.521503  437269 type.go:168] "Request Body" body=""
	I1014 19:41:31.521595  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:31.522014  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:32.021616  437269 type.go:168] "Request Body" body=""
	I1014 19:41:32.021705  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:32.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:32.521679  437269 type.go:168] "Request Body" body=""
	I1014 19:41:32.521783  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:32.522156  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:32.522231  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:33.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:33.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:33.022214  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:33.521935  437269 type.go:168] "Request Body" body=""
	I1014 19:41:33.522024  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:33.522446  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:34.021233  437269 type.go:168] "Request Body" body=""
	I1014 19:41:34.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:34.021702  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:34.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:41:34.521444  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:34.521880  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:35.021696  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.022177  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:35.022244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:35.521929  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.522017  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.522385  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.022241  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.022330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.022808  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.521609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.522099  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:37.021877  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.021957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.022344  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:37.022414  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:37.522189  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.522617  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:38.021362  437269 type.go:168] "Request Body" body=""
	I1014 19:41:38.021440  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:38.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:38.521628  437269 type.go:168] "Request Body" body=""
	I1014 19:41:38.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:38.522097  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:39.021917  437269 type.go:168] "Request Body" body=""
	I1014 19:41:39.022012  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:39.022384  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:39.022447  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:39.522314  437269 type.go:168] "Request Body" body=""
	I1014 19:41:39.522401  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:39.522788  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:40.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:41:40.021857  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:40.022236  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:40.063502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:40.119488  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:40.119566  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:40.119604  437269 retry.go:31] will retry after 34.554126144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
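The "retry.go:31] will retry after 34.554126144s" line above is minikube's generic retry helper re-running the failed kubectl apply after a randomized backoff instead of failing the addon outright. A minimal Go sketch of that pattern, using only the standard library (the retryWithBackoff/applyManifest names and the backoff bounds are illustrative assumptions, not minikube's actual retry.go code; the command and paths are copied from the log):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a randomized duration between tries. A sketch of the pattern
// behind the "will retry after ..." lines, not minikube's implementation.
func retryWithBackoff(attempts int, maxSleep time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := time.Duration(rand.Int63n(int64(maxSleep)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

// applyManifest is an illustrative stand-in for the kubectl apply call
// seen in the log; sudo accepts the KUBECONFIG=... assignment as its
// first argument, matching the logged command line.
func applyManifest() error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	return cmd.Run()
}

func main() {
	if err := retryWithBackoff(3, 40*time.Second, applyManifest); err != nil {
		fmt.Println("giving up:", err)
	}
}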
	I1014 19:41:40.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:41:40.522383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:40.522878  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:41.021513  437269 type.go:168] "Request Body" body=""
	I1014 19:41:41.021597  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:41.021974  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:41.521785  437269 type.go:168] "Request Body" body=""
	I1014 19:41:41.521864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:41.522250  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:41.522330  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:42.022203  437269 type.go:168] "Request Body" body=""
	I1014 19:41:42.022322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:42.022810  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:42.521587  437269 type.go:168] "Request Body" body=""
	I1014 19:41:42.521669  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:42.522059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:43.021981  437269 type.go:168] "Request Body" body=""
	I1014 19:41:43.022074  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:43.022442  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:43.521224  437269 type.go:168] "Request Body" body=""
	I1014 19:41:43.521304  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:43.521705  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:44.021370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.021454  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:44.021956  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:44.521703  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.521821  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.522229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.022076  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.022158  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.022500  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.521283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.521372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.521787  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:46.021585  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.021687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.022067  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:46.022144  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:46.521959  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.522047  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.522400  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.022244  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.022326  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.022720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.521502  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.521586  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.521971  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.932453  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:47.984361  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:47.987254  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:47.987292  437269 retry.go:31] will retry after 37.673790461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
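Interleaved with the apply retries, the node_ready.go:55 warnings come from a second loop: a roughly 500ms poll of /api/v1/nodes/functional-744288 that tolerates "connection refused" while the apiserver is down. A rough Go sketch of that polling shape (assumptions: waitNodeReady is a made-up name, TLS verification is skipped as a stand-in for the kubeconfig credentials a real Kubernetes client would use, and protobuf decoding of the response is omitted):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the node URL until a request succeeds or the
// deadline passes, mirroring the GET-every-500ms shape in the log.
func waitNodeReady(url string, timeout time.Duration) error {
	// Sketch only: InsecureSkipVerify stands in for the real
	// certificate handling performed via the kubeconfig.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the logged warnings: a refused connection is
			// expected while the apiserver restarts, so keep retrying.
			fmt.Printf("error getting node (will retry): %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node not ready within %s", timeout)
}

func main() {
	err := waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-744288", 5*time.Minute)
	fmt.Println(err)
}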
	I1014 19:41:48.021563  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.021661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.022072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:48.521661  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.522153  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:48.522222  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:49.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.021869  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.022246  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:49.521919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.521999  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.021911  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.022358  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.522021  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.522121  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.522513  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:50.522647  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:51.022257  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.022355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.022711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:51.521301  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.521820  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.021365  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.021844  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.521373  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.521451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.521825  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:53.021413  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.021940  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:53.022029  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:53.521560  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.521663  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.522072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.021872  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.021964  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.521983  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.522067  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.021263  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.021357  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.521288  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.521376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.521739  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:55.521840  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:56.021322  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.021409  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:56.521370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.521452  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.521831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.022041  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.022397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.522061  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.522137  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.522480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:57.522553  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:58.022151  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.022236  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.022597  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:58.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.522322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.522668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.021717  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.521251  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.521330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.521703  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:00.021653  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.021752  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.022142  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:00.022220  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:00.522036  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.522123  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.522466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.022199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.022290  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.022633  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.521196  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.521278  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.521637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:02.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.022335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.022740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:02.022848  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:02.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.521405  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.521800  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.021392  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.021749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.521348  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.521443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.521938  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.021944  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.022035  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.022414  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.522132  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.522227  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.522582  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:04.522653  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:05.021481  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.021561  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.021905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:05.521556  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.521637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.522027  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.021613  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.021699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.022057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.521633  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:07.021749  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.021848  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.022194  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:07.022260  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:07.521871  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.521957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.021955  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.022031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.022379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.522039  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.522117  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.522476  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:09.022164  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.022634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:09.022701  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:09.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.521333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.521715  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.021732  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.022260  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.521865  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.522296  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.022051  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.022419  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.522129  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:11.522681  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:12.022256  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.022343  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.022700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:12.521278  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.521732  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.022114  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.022198  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.022561  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.522319  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.522711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:13.522798  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:14.021579  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.021707  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.022154  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.521710  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.522225  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.674573  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:42:14.729085  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729138  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729273  437269 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
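Every failure in this excerpt is the same underlying condition: nothing is listening on port 8441, so both the direct node GETs (192.168.49.2:8441) and kubectl's OpenAPI download for validation (localhost:8441) fail with a TCP "connection refused" before any Kubernetes logic runs. A small standard-library Go sketch of classifying that condition explicitly (the isConnRefused helper is illustrative, not code from minikube):

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

// isConnRefused reports whether err ultimately wraps ECONNREFUSED,
// the condition behind every "dial tcp ... connect: connection
// refused" line in this log.
func isConnRefused(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	// Dial the same endpoint the poller uses; while the apiserver is
	// down this fails immediately with ECONNREFUSED.
	_, err := net.Dial("tcp", "192.168.49.2:8441")
	if err != nil {
		fmt.Println("connection refused?", isConnRefused(err))
	}
}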
	I1014 19:42:15.021737  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.021834  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.022205  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:15.521930  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.522012  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:16.022056  437269 type.go:168] "Request Body" body=""
	I1014 19:42:16.022143  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:16.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:16.022609  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:16.522173  437269 type.go:168] "Request Body" body=""
	I1014 19:42:16.522253  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:16.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:17.021294  437269 type.go:168] "Request Body" body=""
	I1014 19:42:17.021370  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:17.021733  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:17.521444  437269 type.go:168] "Request Body" body=""
	I1014 19:42:17.521548  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:17.521910  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:18.022124  437269 type.go:168] "Request Body" body=""
	I1014 19:42:18.022209  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:18.022551  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:18.022636  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:18.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:18.522276  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:18.522605  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:19.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:19.022337  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:19.022731  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:19.521317  437269 type.go:168] "Request Body" body=""
	I1014 19:42:19.521448  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:19.521836  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:20.021610  437269 type.go:168] "Request Body" body=""
	I1014 19:42:20.021710  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:20.022103  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:20.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:42:20.521810  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:20.522173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:20.522240  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:21.021782  437269 type.go:168] "Request Body" body=""
	I1014 19:42:21.021881  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:21.022300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:21.521996  437269 type.go:168] "Request Body" body=""
	I1014 19:42:21.522075  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:21.522493  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:22.022092  437269 type.go:168] "Request Body" body=""
	I1014 19:42:22.022170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:22.022570  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:22.522183  437269 type.go:168] "Request Body" body=""
	I1014 19:42:22.522272  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:22.522625  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:22.522688  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:23.021971  437269 type.go:168] "Request Body" body=""
	I1014 19:42:23.022063  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:23.022422  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:23.522081  437269 type.go:168] "Request Body" body=""
	I1014 19:42:23.522162  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:23.522509  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:24.022288  437269 type.go:168] "Request Body" body=""
	I1014 19:42:24.022385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:24.022833  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:24.521351  437269 type.go:168] "Request Body" body=""
	I1014 19:42:24.521424  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:24.521791  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:25.021730  437269 type.go:168] "Request Body" body=""
	I1014 19:42:25.021831  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:25.022212  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:25.022288  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:25.521848  437269 type.go:168] "Request Body" body=""
	I1014 19:42:25.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:25.522288  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:25.661672  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:42:25.715017  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717809  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717938  437269 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 19:42:25.719888  437269 out.go:179] * Enabled addons: 
	I1014 19:42:25.722455  437269 addons.go:514] duration metric: took 1m51.818834592s for enable addons: enabled=[]
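The storage-provisioner failure above is a symptom of the same outage, not a bad manifest: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and localhost:8441 refuses the connection before the manifest is ever evaluated. The --validate=false hint in the error would not help here, since the apply itself needs the same dead endpoint. A minimal sketch of the retry that addons.go:461 promises ("apply failed, will retry"), assuming kubectl is on PATH and already pointed at the cluster; minikube's real callback machinery is more involved:

// Sketch only: retries the addon apply until the apiserver accepts it,
// using plain os/exec instead of minikube's addons.go callbacks.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"apply", "--force",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("applied on attempt %d:\n%s", attempt, out)
			return
		}
		// "failed to download openapi ... connection refused" lands here;
		// back off and retry rather than disabling validation.
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(3 * time.Second)
	}
}

In this run the apiserver never comes back, so the retries are exhausted and the addon list at addons.go:514 is recorded as enabled=[].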
	I1014 19:42:26.021269  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.021349  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.021816  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:26.521369  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.521477  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.521916  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:27.021507  437269 type.go:168] "Request Body" body=""
	I1014 19:42:27.021605  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:27.021991  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:27.521602  437269 type.go:168] "Request Body" body=""
	I1014 19:42:27.521721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:27.522084  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:27.522146  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:28.021642  437269 type.go:168] "Request Body" body=""
	I1014 19:42:28.021743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:28.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:28.521702  437269 type.go:168] "Request Body" body=""
	I1014 19:42:28.521807  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:28.522163  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:29.021797  437269 type.go:168] "Request Body" body=""
	I1014 19:42:29.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:29.022267  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:29.522074  437269 type.go:168] "Request Body" body=""
	I1014 19:42:29.522173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:29.522553  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:29.522671  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:30.021560  437269 type.go:168] "Request Body" body=""
	I1014 19:42:30.021654  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:30.022115  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:30.521649  437269 type.go:168] "Request Body" body=""
	I1014 19:42:30.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:30.522178  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:31.021725  437269 type.go:168] "Request Body" body=""
	I1014 19:42:31.021826  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:31.022186  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:31.521880  437269 type.go:168] "Request Body" body=""
	I1014 19:42:31.521996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:31.522379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:32.021983  437269 type.go:168] "Request Body" body=""
	I1014 19:42:32.022060  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:32.022435  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:32.022510  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:32.522077  437269 type.go:168] "Request Body" body=""
	I1014 19:42:32.522170  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:32.522524  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:33.022165  437269 type.go:168] "Request Body" body=""
	I1014 19:42:33.022248  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:33.022592  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:33.521797  437269 type.go:168] "Request Body" body=""
	I1014 19:42:33.522204  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:33.522657  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:34.021345  437269 type.go:168] "Request Body" body=""
	I1014 19:42:34.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:34.021864  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:34.521442  437269 type.go:168] "Request Body" body=""
	I1014 19:42:34.521536  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:34.521932  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:34.522018  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:35.021950  437269 type.go:168] "Request Body" body=""
	I1014 19:42:35.022028  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:35.022451  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:35.521247  437269 type.go:168] "Request Body" body=""
	I1014 19:42:35.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:35.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:36.021379  437269 type.go:168] "Request Body" body=""
	I1014 19:42:36.021471  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:36.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:36.521476  437269 type.go:168] "Request Body" body=""
	I1014 19:42:36.521569  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:36.521989  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:36.522059  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:37.021550  437269 type.go:168] "Request Body" body=""
	I1014 19:42:37.021627  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:37.022016  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:37.521641  437269 type.go:168] "Request Body" body=""
	I1014 19:42:37.521743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:37.522187  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:38.021859  437269 type.go:168] "Request Body" body=""
	I1014 19:42:38.021939  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:38.022324  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:38.521989  437269 type.go:168] "Request Body" body=""
	I1014 19:42:38.522080  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:38.522434  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:38.522503  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:39.022081  437269 type.go:168] "Request Body" body=""
	I1014 19:42:39.022165  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:39.022503  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:39.522189  437269 type.go:168] "Request Body" body=""
	I1014 19:42:39.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:39.522650  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:40.021651  437269 type.go:168] "Request Body" body=""
	I1014 19:42:40.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:40.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:40.521658  437269 type.go:168] "Request Body" body=""
	I1014 19:42:40.521778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:40.522143  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:41.021691  437269 type.go:168] "Request Body" body=""
	I1014 19:42:41.021793  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:41.022157  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:41.022225  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:41.521808  437269 type.go:168] "Request Body" body=""
	I1014 19:42:41.521901  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:41.522267  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:42.021874  437269 type.go:168] "Request Body" body=""
	I1014 19:42:42.021955  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:42.022329  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:42.521975  437269 type.go:168] "Request Body" body=""
	I1014 19:42:42.522059  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:42.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:43.022032  437269 type.go:168] "Request Body" body=""
	I1014 19:42:43.022120  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:43.022486  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:43.022552  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:43.522253  437269 type.go:168] "Request Body" body=""
	I1014 19:42:43.522342  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:43.522709  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:44.021548  437269 type.go:168] "Request Body" body=""
	I1014 19:42:44.021646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:44.022079  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:44.521677  437269 type.go:168] "Request Body" body=""
	I1014 19:42:44.521784  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:44.522202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:45.022110  437269 type.go:168] "Request Body" body=""
	I1014 19:42:45.022196  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:45.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:45.022661  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:45.522180  437269 type.go:168] "Request Body" body=""
	I1014 19:42:45.522266  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:45.522677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:46.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:42:46.021324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:46.021716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:46.521270  437269 type.go:168] "Request Body" body=""
	I1014 19:42:46.521348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:46.521722  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:47.021311  437269 type.go:168] "Request Body" body=""
	I1014 19:42:47.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:47.021779  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:47.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:42:47.521433  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:47.521823  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:47.521900  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:48.021360  437269 type.go:168] "Request Body" body=""
	I1014 19:42:48.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:48.021837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:48.521366  437269 type.go:168] "Request Body" body=""
	I1014 19:42:48.521469  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:48.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:49.022003  437269 type.go:168] "Request Body" body=""
	I1014 19:42:49.022085  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:49.022428  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:49.522046  437269 type.go:168] "Request Body" body=""
	I1014 19:42:49.522124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:49.522478  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:49.522562  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:50.021433  437269 type.go:168] "Request Body" body=""
	I1014 19:42:50.021542  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:50.021987  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:50.521590  437269 type.go:168] "Request Body" body=""
	I1014 19:42:50.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:50.521991  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:51.021671  437269 type.go:168] "Request Body" body=""
	I1014 19:42:51.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:51.022149  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:51.521719  437269 type.go:168] "Request Body" body=""
	I1014 19:42:51.521832  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:51.522215  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:52.021893  437269 type.go:168] "Request Body" body=""
	I1014 19:42:52.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:52.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:52.022411  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:52.522080  437269 type.go:168] "Request Body" body=""
	I1014 19:42:52.522183  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:52.522617  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:53.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:42:53.022323  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:53.022716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:53.521304  437269 type.go:168] "Request Body" body=""
	I1014 19:42:53.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:53.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:54.021685  437269 type.go:168] "Request Body" body=""
	I1014 19:42:54.021789  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:54.521747  437269 type.go:168] "Request Body" body=""
	I1014 19:42:54.521851  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:54.522275  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:54.522352  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:55.022087  437269 type.go:168] "Request Body" body=""
	I1014 19:42:55.022177  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:55.022557  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:55.522187  437269 type.go:168] "Request Body" body=""
	I1014 19:42:55.522285  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:55.522718  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:56.021281  437269 type.go:168] "Request Body" body=""
	I1014 19:42:56.021383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:56.521354  437269 type.go:168] "Request Body" body=""
	I1014 19:42:56.521430  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:56.521815  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:57.021386  437269 type.go:168] "Request Body" body=""
	I1014 19:42:57.021483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:57.021914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:57.021999  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:57.521600  437269 type.go:168] "Request Body" body=""
	I1014 19:42:57.521687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:57.522087  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:58.021700  437269 type.go:168] "Request Body" body=""
	I1014 19:42:58.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:58.022207  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:58.521870  437269 type.go:168] "Request Body" body=""
	I1014 19:42:58.521949  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:58.522303  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:59.021970  437269 type.go:168] "Request Body" body=""
	I1014 19:42:59.022045  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:59.022443  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:59.022507  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:59.522038  437269 type.go:168] "Request Body" body=""
	I1014 19:42:59.522131  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:59.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:00.021506  437269 type.go:168] "Request Body" body=""
	I1014 19:43:00.021597  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:00.021981  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:00.521539  437269 type.go:168] "Request Body" body=""
	I1014 19:43:00.521625  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:00.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:01.021567  437269 type.go:168] "Request Body" body=""
	I1014 19:43:01.021646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:01.022034  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:01.521607  437269 type.go:168] "Request Body" body=""
	I1014 19:43:01.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:01.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:01.522169  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:02.021674  437269 type.go:168] "Request Body" body=""
	I1014 19:43:02.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:02.022118  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:02.521701  437269 type.go:168] "Request Body" body=""
	I1014 19:43:02.521802  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:02.522123  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:03.021671  437269 type.go:168] "Request Body" body=""
	I1014 19:43:03.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:03.022117  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:03.521807  437269 type.go:168] "Request Body" body=""
	I1014 19:43:03.521898  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:03.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:03.522377  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:04.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:43:04.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:04.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:04.521290  437269 type.go:168] "Request Body" body=""
	I1014 19:43:04.521389  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:04.521814  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:05.021660  437269 type.go:168] "Request Body" body=""
	I1014 19:43:05.021743  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:05.022150  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:05.521749  437269 type.go:168] "Request Body" body=""
	I1014 19:43:05.521888  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:05.522240  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:06.021896  437269 type.go:168] "Request Body" body=""
	I1014 19:43:06.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:06.022415  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:06.022501  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:06.522060  437269 type.go:168] "Request Body" body=""
	I1014 19:43:06.522142  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:06.522496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:07.022152  437269 type.go:168] "Request Body" body=""
	I1014 19:43:07.022255  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:07.022672  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:07.521243  437269 type.go:168] "Request Body" body=""
	I1014 19:43:07.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:07.521730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:08.021306  437269 type.go:168] "Request Body" body=""
	I1014 19:43:08.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:08.021797  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:08.521379  437269 type.go:168] "Request Body" body=""
	I1014 19:43:08.521475  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:08.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:08.521921  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:09.021427  437269 type.go:168] "Request Body" body=""
	I1014 19:43:09.021525  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:09.021943  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:09.521610  437269 type.go:168] "Request Body" body=""
	I1014 19:43:09.521709  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:09.522074  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:10.021890  437269 type.go:168] "Request Body" body=""
	I1014 19:43:10.021973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:10.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:10.522040  437269 type.go:168] "Request Body" body=""
	I1014 19:43:10.522122  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:10.522464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:10.522545  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 poll repeats every ~500ms from 19:43:11 through 19:44:11, each returning an empty response, with node_ready.go:55 logging the identical "connection refused" (will retry) warning roughly every 2s; duplicate entries elided ...]
	W1014 19:44:11.022194  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:11.521807  437269 type.go:168] "Request Body" body=""
	I1014 19:44:11.521884  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:11.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:12.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:44:12.022049  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:12.022424  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:12.522133  437269 type.go:168] "Request Body" body=""
	I1014 19:44:12.522233  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:12.522615  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:13.022268  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.022358  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.022774  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:13.022845  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:13.521351  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.521431  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.521806  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.021818  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.021912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.522064  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.522156  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.522518  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.021381  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.021468  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.021826  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.521487  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:15.521934  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:16.021382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.021472  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:16.521402  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.521496  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.521958  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.021537  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.021618  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.022006  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.521572  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.521652  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.522068  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:17.522135  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:18.021636  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.022112  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:18.521664  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.521790  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.522173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.021791  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.021887  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.022264  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.521890  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.521989  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:19.522432  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:20.022234  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.022313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:20.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.521321  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.021357  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.021856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.521555  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.521969  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:22.021534  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.022029  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:22.022098  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:22.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.521729  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.522128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.021712  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.021820  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.022176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.521802  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.521885  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.522258  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:24.022112  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.022201  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.022532  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:24.022600  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:24.522195  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.522634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.021596  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.021676  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.022088  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.521654  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.521741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.522131  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.021684  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.021798  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.022168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.521801  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.522232  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:26.522299  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:27.021847  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.021933  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.022292  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:27.521878  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.521963  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.522328  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.021519  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.021599  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.021968  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.521573  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.521667  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.522077  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:29.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.021839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.022235  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:29.022308  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:29.521910  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.522006  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.522371  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.021252  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.021348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.021744  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.521308  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.521407  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.521858  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.021447  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.021537  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.021993  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.521577  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.521661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.522091  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:31.522171  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:32.021679  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.022180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:32.521862  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.521962  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.522305  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.022031  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.022124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.022484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.522294  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.522643  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:33.522730  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:34.021707  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.021853  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.022332  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:34.522025  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.522147  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.522536  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.021511  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.021620  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.022043  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.522236  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.522681  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:36.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.021313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.021734  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:36.021830  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:36.521316  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.521393  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.521798  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.021352  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.021434  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.521479  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.521566  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.521949  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:38.021522  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.021608  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.022020  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:38.022085  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:38.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.522063  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.021622  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.021702  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.022125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.521740  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.521841  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.522231  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:40.022072  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.022157  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:40.022560  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:40.522145  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.522230  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.021191  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.021271  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.021663  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.521242  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.521677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.021221  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.021300  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.021721  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.521295  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:42.521860  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:43.021377  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.021470  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.021882  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:43.521445  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.521535  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.021811  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.521977  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.522062  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:44.522472  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:45.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.021700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:45.521363  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.521476  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.021493  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.021898  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.521589  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.521682  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.522048  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:47.021649  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.021730  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.022119  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:47.022190  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:47.521670  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.022200  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.521828  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.521908  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:49.021930  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.022025  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.022391  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:49.022471  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:49.522012  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.522093  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.522436  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.021359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.021746  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.521381  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.521749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.021292  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.021375  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.021830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.521389  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.521483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:51.521938  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:52.021392  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.021501  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.021933  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:52.521524  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.521606  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.522002  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.021549  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.522129  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:53.522202  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:54.022063  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.022155  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.022563  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:54.522249  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.522346  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.522749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.021750  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.022126  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.521847  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.522237  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:55.522304  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:56.021875  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.021958  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:56.521953  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.522031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.522402  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.022099  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.022184  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.022571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.522215  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.522635  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:57.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:58.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.021331  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.021778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:58.521330  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.521406  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.521792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.021307  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.021783  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.521317  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.521404  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.521833  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:00.021727  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.021828  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.022220  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:00.022290  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:45:00.521874  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.521969  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.522342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the request/response pair above repeats unchanged every ~500ms from 19:45:01.022 through 19:46:00.522, every response empty (status="" headers="" milliseconds=0); the node_ready.go "will retry" warning recurs every 2-2.5s throughout, the last occurrence being: ...]
	W1014 19:46:00.522338  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:01.022015  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.022109  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:01.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.522284  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.522792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.021802  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.521435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:03.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.021512  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.021843  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:03.021936  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:03.521495  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.521638  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.522055  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.022126  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.022216  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.022594  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.522303  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.522679  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:05.021591  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.021704  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:05.022161  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:05.521689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.521808  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.522192  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.021790  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.022280  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.521951  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.522040  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.522397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:07.022069  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.022173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:07.022606  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:07.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.522298  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.522637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.021220  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.021314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.021696  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.521279  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.521778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.021343  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.021451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.021866  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.521459  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.521838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:09.521913  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:10.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.021744  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:10.521668  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.521745  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.522134  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.021817  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.022226  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.521863  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.521950  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.522316  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:11.522391  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:12.022004  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.022466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:12.522152  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.522231  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.522572  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.022208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.022686  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.521212  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.521286  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.521620  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:14.021358  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.021869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:14.021948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:14.521427  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.521830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.521922  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.522020  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.522429  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:16.022119  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.022517  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:16.022586  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:16.521207  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.521315  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.521711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.021272  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.021355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.021723  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.521289  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.021359  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.021849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:18.521988  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:19.021521  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:19.521715  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.022258  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.022646  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.522713  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:20.522805  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:21.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.021805  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:21.521347  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.521438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.021364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.021456  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.021861  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.521399  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.521520  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.521917  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:23.021531  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.021637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.022036  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:23.022100  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:23.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.522062  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.021884  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.021977  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.022350  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.522011  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.522097  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.522508  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.021512  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.021596  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.521632  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.521726  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.522148  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:25.522244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:26.021740  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.022219  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:26.521873  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.521956  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.022036  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.022129  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.522188  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:27.522745  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:28.021236  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.021317  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.021676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:28.521949  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.522027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.522409  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.022101  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.022190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.022539  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.522171  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.522256  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.522639  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:30.021643  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:30.022208  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:30.521811  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.521894  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.522289  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.022066  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.022164  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.522208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.522719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.021314  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.021832  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.521461  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:32.521920  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:33.021401  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:33.521545  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.521653  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:34.021736  437269 type.go:168] "Request Body" body=""
	I1014 19:46:34.022027  437269 node_ready.go:38] duration metric: took 6m0.00093705s for node "functional-744288" to be "Ready" ...
	I1014 19:46:34.025220  437269 out.go:203] 
	W1014 19:46:34.026860  437269 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 19:46:34.026878  437269 out.go:285] * 
	W1014 19:46:34.028574  437269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:46:34.030019  437269 out.go:203] 
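The wait loop above is the direct cause of the GUEST_START failure: minikube re-issues the same node GET every ~500ms until its 6m0s deadline, and every attempt fails with connection refused because nothing is listening on 192.168.49.2:8441. A minimal shell sketch of the same probe, useful for reproducing the symptom by hand (the endpoint is taken from the log; /healthz is the standard apiserver health path, assumed here rather than shown in the log):

	# Single probe: expect "connection refused" while the control plane is down.
	curl -sk --max-time 2 https://192.168.49.2:8441/api/v1/nodes/functional-744288 || echo "apiserver unreachable"

	# Same cadence and deadline as the log: poll every 0.5s for up to 6 minutes.
	deadline=$((SECONDS + 360))
	until curl -sk --max-time 2 https://192.168.49.2:8441/healthz >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for apiserver"; break; }
	  sleep 0.5
	done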
	
	
	==> CRI-O <==
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.416602802Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c15e9887-9828-442c-b32f-b9922d8e40ac name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.729591696Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b62d124f-6584-4711-88c1-0b165828185a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.729772767Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b62d124f-6584-4711-88c1-0b165828185a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.729820387Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=b62d124f-6584-4711-88c1-0b165828185a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.307714844Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=053772f9-08c0-4525-84ce-7a6d7953be6c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.307875021Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=053772f9-08c0-4525-84ce-7a6d7953be6c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.307909073Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=053772f9-08c0-4525-84ce-7a6d7953be6c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.335249563Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c9faeb66-8800-434d-93d6-9b537b9fb0f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.335403796Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c9faeb66-8800-434d-93d6-9b537b9fb0f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.33544857Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=c9faeb66-8800-434d-93d6-9b537b9fb0f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.35998157Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f91576cd-e278-41c5-a76f-8db57bc77203 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.360127865Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f91576cd-e278-41c5-a76f-8db57bc77203 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.360206948Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=f91576cd-e278-41c5-a76f-8db57bc77203 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.830799939Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=453726c1-6f81-486c-90fa-d6a5f8819591 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.837311462Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=87012645-7107-490b-870d-45e35f2ed8d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.838266924Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=0fdb39c3-2596-44ea-be9f-d601f941db0b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.839296292Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.839521325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.842995952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.843406794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.861346546Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.862775557Z" level=info msg="createCtr: deleting container ID dad5edfe79e46b3e27de965fa552932dff803925c49e7b849ee52bdcdc897a09 from idIndex" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.862817514Z" level=info msg="createCtr: removing container dad5edfe79e46b3e27de965fa552932dff803925c49e7b849ee52bdcdc897a09" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.862859434Z" level=info msg="createCtr: deleting container dad5edfe79e46b3e27de965fa552932dff803925c49e7b849ee52bdcdc897a09 from storage" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.864956682Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_7dacb23619ff0889511bcb2e81339e77_0" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
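Every control-plane container in this section fails at create time with "cannot open sd-bus: No such file or directory", i.e. the OCI runtime cannot reach a systemd D-Bus socket while setting up the container, which typically means CRI-O is configured for the systemd cgroup manager on a node where that socket is missing. A plausible triage sketch under that assumption (the paths and the cgroup_manager key are standard CRI-O/systemd locations, not taken from this log):

	# Is systemd PID 1 inside the minikube node, and is its private bus socket present?
	docker exec functional-744288 ps -p 1 -o comm=
	docker exec functional-744288 ls -l /run/systemd/private

	# Which cgroup manager is CRI-O configured with?
	docker exec functional-744288 grep -r cgroup_manager /etc/crio/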
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:46:47.321351    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:47.321959    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:47.323500    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:47.324041    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:47.325448    5302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
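The kubectl failure here is downstream of the same problem: with no kube-apiserver container running, nothing serves port 8441. A quick confirmation from inside the node (a sketch; ss and crictl are assumed to be available in minikube's node image):

	# Is anything listening on the apiserver port?
	docker exec functional-744288 ss -ltn 'sport = :8441'

	# Has the kube-apiserver container ever been created?
	docker exec functional-744288 crictl ps -a --name kube-apiserver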
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:46:47 up  2:29,  0 user,  load average: 0.17, 0.08, 2.24
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:46:38 functional-744288 kubelet[1809]:  > podSandboxID="e8186070b2ac7bccf45cf53cdedb42b8128ae6650737da34ded6f3d9a5f75310"
	Oct 14 19:46:38 functional-744288 kubelet[1809]: E1014 19:46:38.877162    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:38 functional-744288 kubelet[1809]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:38 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:38 functional-744288 kubelet[1809]: E1014 19:46:38.878336    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.836910    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.865256    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:41 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:41 functional-744288 kubelet[1809]:  > podSandboxID="de75312ccca355aabaabb18a5eb1e6d7a7e4d5b3fb088ce1c5eb28a39d567355"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.865384    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:41 functional-744288 kubelet[1809]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:41 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.865426    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:46:42 functional-744288 kubelet[1809]: E1014 19:46:42.518626    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:46:42 functional-744288 kubelet[1809]: I1014 19:46:42.743900    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:46:42 functional-744288 kubelet[1809]: E1014 19:46:42.744338    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.836842    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.865300    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:45 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:45 functional-744288 kubelet[1809]:  > podSandboxID="d501fdff2b92902ecd1a22b235a50d225f771b04701776d8a1bb0e78b9481d1c"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.865414    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:45 functional-744288 kubelet[1809]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(7dacb23619ff0889511bcb2e81339e77): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:45 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.865451    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="7dacb23619ff0889511bcb2e81339e77"
	Oct 14 19:46:47 functional-744288 kubelet[1809]: E1014 19:46:47.102630    1809 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-744288.186e72ac19058e88\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e72ac19058e88  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:36:27.828178568 +0000 UTC m=+0.685163688,LastTimestamp:2025-10-14 19:36:27.829543993 +0000 UTC m=+0.686529115,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (313.731719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.20s)
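helpers_test reads a single field through minikube's Go-template status output; when triaging by hand, the same command can dump every field (a sketch using minikube's documented --output and --format flags; the field names follow the status template):

	out/minikube-linux-amd64 status -p functional-744288 --output=json
	out/minikube-linux-amd64 status -p functional-744288 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'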

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.3s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-744288 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-744288 get pods: exit status 1 (112.645892ms)

                                                
                                                
** stderr ** 
	E1014 19:46:48.254019  443186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:48.254379  443186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:48.255860  443186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:48.256234  443186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:46:48.257709  443186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-744288 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
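
The inspect output above shows the apiserver port 8441/tcp published on 127.0.0.1:32901. As a sketch, the same mapping can be read back with the docker container inspect -f Go template that the minikube logs below apply to 22/tcp, pointed here at 8441/tcp (the wrapper program is assumed, not part of the suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template shape minikube's cli_runner uses for port 22/tcp below.
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-744288").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("host port for 8441/tcp:", strings.TrimSpace(string(out))) // 32901 in this run
	}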
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (300.770732ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
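
The harness notes "(may be ok)" because minikube status encodes cluster component state in its exit code, so a non-zero code during a post-mortem is expected rather than a harness error. A minimal sketch (assumed, not from the suite) of surfacing that code in Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// The binary path, flags, and profile name are the ones used in this report.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "functional-744288", "-n", "functional-744288")
		out, err := cmd.Output() // Output still returns the captured stdout alongside an ExitError
		fmt.Printf("stdout: %s\n", out) // "Running" in the capture above
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 2 in this run
		}
	}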
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 logs -n 25: (1.020327555s)
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-442016 --log_dir /tmp/nospam-442016 pause                                                              │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                              │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	│ start   │ -p functional-744288 --alsologtostderr -v=8                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:40 UTC │                     │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.1                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.3                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:latest                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add minikube-local-cache-test:functional-744288                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache delete minikube-local-cache-test:functional-744288                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl images                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ cache   │ functional-744288 cache reload                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ kubectl │ functional-744288 kubectl -- --context functional-744288 get pods                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:40:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:40:29.999204  437269 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:40:29.999451  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999459  437269 out.go:374] Setting ErrFile to fd 2...
	I1014 19:40:29.999463  437269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:40:29.999664  437269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:40:30.000162  437269 out.go:368] Setting JSON to false
	I1014 19:40:30.001140  437269 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8576,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:40:30.001253  437269 start.go:141] virtualization: kvm guest
	I1014 19:40:30.003929  437269 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:40:30.005394  437269 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:40:30.005413  437269 notify.go:220] Checking for updates...
	I1014 19:40:30.008578  437269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:40:30.009922  437269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:30.011325  437269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:40:30.012721  437269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:40:30.014074  437269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:40:30.015738  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:30.015851  437269 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:40:30.041344  437269 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:40:30.041571  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.106855  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.095983875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.106976  437269 docker.go:318] overlay module found
	I1014 19:40:30.108953  437269 out.go:179] * Using the docker driver based on existing profile
	I1014 19:40:30.110337  437269 start.go:305] selected driver: docker
	I1014 19:40:30.110363  437269 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.110446  437269 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:40:30.110529  437269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:40:30.176521  437269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:40:30.165510899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:40:30.177154  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:30.177215  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:30.177273  437269 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:30.179329  437269 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:40:30.180795  437269 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:40:30.182356  437269 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:40:30.183701  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:30.183742  437269 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:40:30.183752  437269 cache.go:58] Caching tarball of preloaded images
	I1014 19:40:30.183799  437269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:40:30.183863  437269 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:40:30.183877  437269 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:40:30.183979  437269 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:40:30.204077  437269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:40:30.204098  437269 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:40:30.204114  437269 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:40:30.204155  437269 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:40:30.204220  437269 start.go:364] duration metric: took 47.096µs to acquireMachinesLock for "functional-744288"
	I1014 19:40:30.204240  437269 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:40:30.204245  437269 fix.go:54] fixHost starting: 
	I1014 19:40:30.204447  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:30.222380  437269 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:40:30.222430  437269 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:40:30.224794  437269 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:40:30.224832  437269 machine.go:93] provisionDockerMachine start ...
	I1014 19:40:30.224915  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.243631  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.243897  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.243914  437269 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:40:30.392088  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.392121  437269 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:40:30.392200  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.410333  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.410549  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.410563  437269 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:40:30.567306  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:40:30.567398  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:30.585534  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:30.585774  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:30.585794  437269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:40:30.733740  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:40:30.733790  437269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:40:30.733813  437269 ubuntu.go:190] setting up certificates
	I1014 19:40:30.733825  437269 provision.go:84] configureAuth start
	I1014 19:40:30.733878  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:30.751946  437269 provision.go:143] copyHostCerts
	I1014 19:40:30.751989  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752023  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:40:30.752048  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:40:30.752133  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:40:30.752237  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752267  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:40:30.752278  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:40:30.752320  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:40:30.752387  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752412  437269 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:40:30.752422  437269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:40:30.752463  437269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:40:30.752709  437269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:40:31.076864  437269 provision.go:177] copyRemoteCerts
	I1014 19:40:31.076930  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:40:31.076971  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.095322  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.200396  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 19:40:31.200473  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:40:31.218084  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 19:40:31.218140  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:40:31.235905  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 19:40:31.235974  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:40:31.253074  437269 provision.go:87] duration metric: took 519.232689ms to configureAuth
	I1014 19:40:31.253110  437269 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:40:31.253264  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:31.253357  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.271451  437269 main.go:141] libmachine: Using SSH client type: native
	I1014 19:40:31.271661  437269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:40:31.271677  437269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:40:31.540521  437269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:40:31.540549  437269 machine.go:96] duration metric: took 1.315709373s to provisionDockerMachine
	I1014 19:40:31.540561  437269 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:40:31.540571  437269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:40:31.540628  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:40:31.540669  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.559297  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.665251  437269 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:40:31.669234  437269 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1014 19:40:31.669258  437269 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1014 19:40:31.669267  437269 command_runner.go:130] > VERSION_ID="12"
	I1014 19:40:31.669270  437269 command_runner.go:130] > VERSION="12 (bookworm)"
	I1014 19:40:31.669276  437269 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1014 19:40:31.669279  437269 command_runner.go:130] > ID=debian
	I1014 19:40:31.669283  437269 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1014 19:40:31.669288  437269 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1014 19:40:31.669293  437269 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1014 19:40:31.669341  437269 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:40:31.669359  437269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:40:31.669371  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:40:31.669425  437269 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:40:31.669510  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:40:31.669525  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 19:40:31.669592  437269 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:40:31.669600  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> /etc/test/nested/copy/417373/hosts
	I1014 19:40:31.669633  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:40:31.677988  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:31.696543  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:40:31.715275  437269 start.go:296] duration metric: took 174.687158ms for postStartSetup
	I1014 19:40:31.715383  437269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:40:31.715428  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.734376  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.836456  437269 command_runner.go:130] > 39%
	I1014 19:40:31.836544  437269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:40:31.841513  437269 command_runner.go:130] > 178G
	I1014 19:40:31.841552  437269 fix.go:56] duration metric: took 1.637302821s for fixHost
	I1014 19:40:31.841566  437269 start.go:83] releasing machines lock for "functional-744288", held for 1.637335022s
	I1014 19:40:31.841633  437269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:40:31.859002  437269 ssh_runner.go:195] Run: cat /version.json
	I1014 19:40:31.859036  437269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:40:31.859053  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.859093  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:31.877314  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.877547  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:31.978415  437269 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1014 19:40:31.978583  437269 ssh_runner.go:195] Run: systemctl --version
	I1014 19:40:32.030433  437269 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 19:40:32.032548  437269 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1014 19:40:32.032581  437269 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1014 19:40:32.032653  437269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:40:32.071124  437269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 19:40:32.075797  437269 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 19:40:32.076143  437269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:40:32.076213  437269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:40:32.084774  437269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:40:32.084802  437269 start.go:495] detecting cgroup driver to use...
	I1014 19:40:32.084841  437269 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:40:32.084885  437269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:40:32.100807  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:40:32.114918  437269 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:40:32.115001  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:40:32.131082  437269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:40:32.145731  437269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:40:32.234963  437269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:40:32.329593  437269 docker.go:234] disabling docker service ...
	I1014 19:40:32.329671  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:40:32.344729  437269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:40:32.357712  437269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:40:32.445038  437269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:40:32.534134  437269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:40:32.547615  437269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:40:32.562780  437269 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 19:40:32.562835  437269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:40:32.562884  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.572580  437269 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:40:32.572655  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.581715  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.590624  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.599492  437269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:40:32.607979  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.617026  437269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.625607  437269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:32.634661  437269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:40:32.642022  437269 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 19:40:32.642101  437269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:40:32.649948  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:32.737827  437269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:40:32.854779  437269 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:40:32.854851  437269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:40:32.859353  437269 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 19:40:32.859376  437269 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 19:40:32.859382  437269 command_runner.go:130] > Device: 0,59	Inode: 3887        Links: 1
	I1014 19:40:32.859389  437269 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:32.859394  437269 command_runner.go:130] > Access: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859399  437269 command_runner.go:130] > Modify: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859403  437269 command_runner.go:130] > Change: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859408  437269 command_runner.go:130] >  Birth: 2025-10-14 19:40:32.837516724 +0000
	I1014 19:40:32.859438  437269 start.go:563] Will wait 60s for crictl version
	I1014 19:40:32.859485  437269 ssh_runner.go:195] Run: which crictl
	I1014 19:40:32.863222  437269 command_runner.go:130] > /usr/local/bin/crictl
	I1014 19:40:32.863312  437269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:40:32.889462  437269 command_runner.go:130] > Version:  0.1.0
	I1014 19:40:32.889482  437269 command_runner.go:130] > RuntimeName:  cri-o
	I1014 19:40:32.889486  437269 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1014 19:40:32.889490  437269 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 19:40:32.889505  437269 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:40:32.889559  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.920224  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.920251  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.920258  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.920266  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.920279  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.920285  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.920291  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.920303  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.920312  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.920322  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.920332  437269 command_runner.go:130] >      static
	I1014 19:40:32.920340  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.920347  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.920354  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.920358  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.920361  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.920367  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.920371  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.920379  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.920383  437269 command_runner.go:130] >    AppArmorEnabled:  false
	I1014 19:40:32.920453  437269 ssh_runner.go:195] Run: crio --version
	I1014 19:40:32.949467  437269 command_runner.go:130] > crio version 1.34.1
	I1014 19:40:32.949490  437269 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1014 19:40:32.949495  437269 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1014 19:40:32.949499  437269 command_runner.go:130] >    GitTreeState:   dirty
	I1014 19:40:32.949504  437269 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1014 19:40:32.949508  437269 command_runner.go:130] >    GoVersion:      go1.24.6
	I1014 19:40:32.949514  437269 command_runner.go:130] >    Compiler:       gc
	I1014 19:40:32.949525  437269 command_runner.go:130] >    Platform:       linux/amd64
	I1014 19:40:32.949534  437269 command_runner.go:130] >    Linkmode:       static
	I1014 19:40:32.949540  437269 command_runner.go:130] >    BuildTags:
	I1014 19:40:32.949546  437269 command_runner.go:130] >      static
	I1014 19:40:32.949555  437269 command_runner.go:130] >      netgo
	I1014 19:40:32.949560  437269 command_runner.go:130] >      osusergo
	I1014 19:40:32.949567  437269 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1014 19:40:32.949571  437269 command_runner.go:130] >      seccomp
	I1014 19:40:32.949576  437269 command_runner.go:130] >      apparmor
	I1014 19:40:32.949582  437269 command_runner.go:130] >      selinux
	I1014 19:40:32.949588  437269 command_runner.go:130] >    LDFlags:          unknown
	I1014 19:40:32.949592  437269 command_runner.go:130] >    SeccompEnabled:   true
	I1014 19:40:32.949599  437269 command_runner.go:130] >    AppArmorEnabled:  false
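
The crio --version dump above is a simple "Key:  value" listing. A small Go sketch of parsing it into a map, with key names taken from the output shown (the parsing itself is an assumption for illustration, not minikube's code; lines without a colon, such as the BuildTags entries, are skipped):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrioVersion splits "Key:  value" lines such as the
// `crio --version` dump above into a map. Keys like GitCommit and
// GoVersion come from the logged output.
func parseCrioVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), ":")
		if !ok {
			continue // e.g. bare BuildTags entries
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	sample := "GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313\nGoVersion:      go1.24.6\nSeccompEnabled:   true"
	for k, v := range parseCrioVersion(sample) {
		fmt.Printf("%s=%s\n", k, v)
	}
}
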
	I1014 19:40:32.952722  437269 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:40:32.953989  437269 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:40:32.971672  437269 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:40:32.976098  437269 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1014 19:40:32.976178  437269 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:40:32.976267  437269 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:40:32.976332  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.006155  437269 command_runner.go:130] > {
	I1014 19:40:33.006181  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.006186  437269 command_runner.go:130] >     {
	I1014 19:40:33.006194  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.006200  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006209  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.006213  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006218  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006232  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.006248  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.006257  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006270  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.006276  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006281  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006287  437269 command_runner.go:130] >     },
	I1014 19:40:33.006290  437269 command_runner.go:130] >     {
	I1014 19:40:33.006304  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.006316  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006324  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.006330  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006335  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006348  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.006364  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.006372  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006379  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.006388  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006398  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006402  437269 command_runner.go:130] >     },
	I1014 19:40:33.006405  437269 command_runner.go:130] >     {
	I1014 19:40:33.006413  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.006422  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006431  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.006441  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006448  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006463  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.006477  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.006486  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006496  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.006505  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.006513  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006516  437269 command_runner.go:130] >     },
	I1014 19:40:33.006525  437269 command_runner.go:130] >     {
	I1014 19:40:33.006535  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.006545  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006555  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.006563  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006570  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006584  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.006598  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.006607  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006615  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.006619  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006624  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006632  437269 command_runner.go:130] >       },
	I1014 19:40:33.006646  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006657  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006667  437269 command_runner.go:130] >     },
	I1014 19:40:33.006675  437269 command_runner.go:130] >     {
	I1014 19:40:33.006689  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.006695  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006707  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.006714  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006718  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006732  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.006748  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.006767  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006778  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.006786  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006795  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006803  437269 command_runner.go:130] >       },
	I1014 19:40:33.006809  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006819  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006827  437269 command_runner.go:130] >     },
	I1014 19:40:33.006835  437269 command_runner.go:130] >     {
	I1014 19:40:33.006846  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.006855  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.006865  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.006874  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006884  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.006899  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.006910  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.006918  437269 command_runner.go:130] >       ],
	I1014 19:40:33.006926  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.006935  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.006948  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.006957  437269 command_runner.go:130] >       },
	I1014 19:40:33.006967  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.006976  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.006985  437269 command_runner.go:130] >     },
	I1014 19:40:33.006993  437269 command_runner.go:130] >     {
	I1014 19:40:33.007004  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.007011  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007019  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.007027  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007037  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007052  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.007067  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.007076  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007084  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.007092  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007095  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007103  437269 command_runner.go:130] >     },
	I1014 19:40:33.007109  437269 command_runner.go:130] >     {
	I1014 19:40:33.007123  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.007132  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007142  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.007152  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007162  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007175  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.007194  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.007203  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007213  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.007220  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007229  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.007237  437269 command_runner.go:130] >       },
	I1014 19:40:33.007246  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007253  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.007260  437269 command_runner.go:130] >     },
	I1014 19:40:33.007266  437269 command_runner.go:130] >     {
	I1014 19:40:33.007278  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.007285  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.007290  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.007298  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007308  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.007320  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.007334  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.007342  437269 command_runner.go:130] >       ],
	I1014 19:40:33.007351  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.007359  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.007370  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.007376  437269 command_runner.go:130] >       },
	I1014 19:40:33.007380  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.007387  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.007393  437269 command_runner.go:130] >     }
	I1014 19:40:33.007401  437269 command_runner.go:130] >   ]
	I1014 19:40:33.007406  437269 command_runner.go:130] > }
	I1014 19:40:33.007590  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.007603  437269 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:40:33.007661  437269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:40:33.032442  437269 command_runner.go:130] > {
	I1014 19:40:33.032462  437269 command_runner.go:130] >   "images":  [
	I1014 19:40:33.032466  437269 command_runner.go:130] >     {
	I1014 19:40:33.032478  437269 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1014 19:40:33.032485  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032495  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1014 19:40:33.032501  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032508  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032519  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1014 19:40:33.032527  437269 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1014 19:40:33.032534  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032538  437269 command_runner.go:130] >       "size":  "109379124",
	I1014 19:40:33.032542  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032548  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032551  437269 command_runner.go:130] >     },
	I1014 19:40:33.032555  437269 command_runner.go:130] >     {
	I1014 19:40:33.032561  437269 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 19:40:33.032567  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032572  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 19:40:33.032575  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032582  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032591  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 19:40:33.032602  437269 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 19:40:33.032608  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032612  437269 command_runner.go:130] >       "size":  "31470524",
	I1014 19:40:33.032616  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032621  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032626  437269 command_runner.go:130] >     },
	I1014 19:40:33.032629  437269 command_runner.go:130] >     {
	I1014 19:40:33.032635  437269 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1014 19:40:33.032642  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032647  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1014 19:40:33.032652  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032656  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032665  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1014 19:40:33.032675  437269 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1014 19:40:33.032682  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032686  437269 command_runner.go:130] >       "size":  "76103547",
	I1014 19:40:33.032690  437269 command_runner.go:130] >       "username":  "nonroot",
	I1014 19:40:33.032694  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032697  437269 command_runner.go:130] >     },
	I1014 19:40:33.032700  437269 command_runner.go:130] >     {
	I1014 19:40:33.032705  437269 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1014 19:40:33.032709  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032714  437269 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1014 19:40:33.032720  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032724  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032730  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1014 19:40:33.032739  437269 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1014 19:40:33.032743  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032749  437269 command_runner.go:130] >       "size":  "195976448",
	I1014 19:40:33.032772  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032781  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032786  437269 command_runner.go:130] >       },
	I1014 19:40:33.032793  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032798  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032801  437269 command_runner.go:130] >     },
	I1014 19:40:33.032804  437269 command_runner.go:130] >     {
	I1014 19:40:33.032810  437269 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1014 19:40:33.032816  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032821  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1014 19:40:33.032827  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032830  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032837  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1014 19:40:33.032847  437269 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1014 19:40:33.032850  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032858  437269 command_runner.go:130] >       "size":  "89046001",
	I1014 19:40:33.032862  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032866  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032869  437269 command_runner.go:130] >       },
	I1014 19:40:33.032873  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032877  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032880  437269 command_runner.go:130] >     },
	I1014 19:40:33.032883  437269 command_runner.go:130] >     {
	I1014 19:40:33.032889  437269 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1014 19:40:33.032895  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032901  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1014 19:40:33.032906  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032910  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.032917  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1014 19:40:33.032935  437269 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1014 19:40:33.032940  437269 command_runner.go:130] >       ],
	I1014 19:40:33.032944  437269 command_runner.go:130] >       "size":  "76004181",
	I1014 19:40:33.032948  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.032955  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.032958  437269 command_runner.go:130] >       },
	I1014 19:40:33.032963  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.032969  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.032973  437269 command_runner.go:130] >     },
	I1014 19:40:33.032976  437269 command_runner.go:130] >     {
	I1014 19:40:33.032981  437269 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1014 19:40:33.032986  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.032990  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1014 19:40:33.032996  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033000  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033009  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1014 19:40:33.033018  437269 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1014 19:40:33.033023  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033027  437269 command_runner.go:130] >       "size":  "73138073",
	I1014 19:40:33.033033  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033037  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033042  437269 command_runner.go:130] >     },
	I1014 19:40:33.033045  437269 command_runner.go:130] >     {
	I1014 19:40:33.033051  437269 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1014 19:40:33.033055  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033059  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1014 19:40:33.033062  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033066  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033073  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1014 19:40:33.033115  437269 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1014 19:40:33.033125  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033129  437269 command_runner.go:130] >       "size":  "53844823",
	I1014 19:40:33.033133  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033139  437269 command_runner.go:130] >         "value":  "0"
	I1014 19:40:33.033142  437269 command_runner.go:130] >       },
	I1014 19:40:33.033146  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033150  437269 command_runner.go:130] >       "pinned":  false
	I1014 19:40:33.033153  437269 command_runner.go:130] >     },
	I1014 19:40:33.033157  437269 command_runner.go:130] >     {
	I1014 19:40:33.033166  437269 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1014 19:40:33.033170  437269 command_runner.go:130] >       "repoTags":  [
	I1014 19:40:33.033175  437269 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.033180  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033184  437269 command_runner.go:130] >       "repoDigests":  [
	I1014 19:40:33.033194  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1014 19:40:33.033201  437269 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1014 19:40:33.033207  437269 command_runner.go:130] >       ],
	I1014 19:40:33.033210  437269 command_runner.go:130] >       "size":  "742092",
	I1014 19:40:33.033214  437269 command_runner.go:130] >       "uid":  {
	I1014 19:40:33.033217  437269 command_runner.go:130] >         "value":  "65535"
	I1014 19:40:33.033221  437269 command_runner.go:130] >       },
	I1014 19:40:33.033227  437269 command_runner.go:130] >       "username":  "",
	I1014 19:40:33.033231  437269 command_runner.go:130] >       "pinned":  true
	I1014 19:40:33.033234  437269 command_runner.go:130] >     }
	I1014 19:40:33.033237  437269 command_runner.go:130] >   ]
	I1014 19:40:33.033243  437269 command_runner.go:130] > }
	I1014 19:40:33.033339  437269 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:40:33.033350  437269 cache_images.go:85] Images are preloaded, skipping loading
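
The two preload checks above (crio.go:514, cache_images.go:85) reduce to decoding the "crictl images --output json" payload and confirming the required tags are present. A minimal Go sketch under that reading; the struct fields mirror the JSON dumps above, while the wanted-image list in main is purely illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the `crictl images --output json`
// dumps above; only what the check needs is declared.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

type imageList struct {
	Images []image `json:"images"`
}

// allPreloaded reports whether every wanted tag appears in the dump.
func allPreloaded(raw []byte, wanted []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10.1"})
	fmt.Println(ok, err)
}
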
	I1014 19:40:33.033357  437269 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:40:33.033466  437269 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
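
The kubelet unit logged at kubeadm.go:946 follows the standard systemd drop-in idiom: the empty ExecStart= line clears the packaged default before the full command line is set. A hedged text/template sketch that produces a unit of this shape (the template and its field names are assumptions for illustration, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// unitTmpl mirrors the shape of the drop-in logged above: the empty
// ExecStart= resets the service's packaged command before the real
// kubelet invocation is set. Field names here are illustrative.
const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"NodeName":    "functional-744288",
		"NodeIP":      "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}
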
	I1014 19:40:33.033525  437269 ssh_runner.go:195] Run: crio config
	I1014 19:40:33.060289  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059904069Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1014 19:40:33.060322  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059934761Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1014 19:40:33.060333  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.05995717Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1014 19:40:33.060344  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.059977069Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1014 19:40:33.060356  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060036887Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:40:33.060415  437269 command_runner.go:130] ! time="2025-10-14T19:40:33.060204237Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1014 19:40:33.072518  437269 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
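
The "Updating config from drop-in file" messages above show /etc/crio/crio.conf.d entries being applied in lexical order, so 10-crio.conf overrides 02-crio.conf wherever they overlap. A small Go sketch of listing drop-ins in that apply order (the merge semantics are CRI-O's; this listing helper is illustrative):

package main

import (
	"fmt"
	"path/filepath"
	"sort"
)

// dropIns lists *.conf files under a crio.conf.d-style directory in
// the lexical order they are applied; later files override earlier
// ones, so 10-crio.conf wins over 02-crio.conf. filepath.Glob already
// sorts its results; the explicit sort just documents the rule.
func dropIns(dir string) ([]string, error) {
	files, err := filepath.Glob(filepath.Join(dir, "*.conf"))
	if err != nil {
		return nil, err
	}
	sort.Strings(files)
	return files, nil
}

func main() {
	files, err := dropIns("/etc/crio/crio.conf.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, f := range files {
		fmt.Println(f)
	}
}
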
	I1014 19:40:33.078451  437269 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 19:40:33.078471  437269 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 19:40:33.078478  437269 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 19:40:33.078485  437269 command_runner.go:130] > #
	I1014 19:40:33.078491  437269 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 19:40:33.078497  437269 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 19:40:33.078504  437269 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 19:40:33.078513  437269 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 19:40:33.078518  437269 command_runner.go:130] > # reload'.
	I1014 19:40:33.078524  437269 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 19:40:33.078533  437269 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 19:40:33.078539  437269 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 19:40:33.078545  437269 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 19:40:33.078551  437269 command_runner.go:130] > [crio]
	I1014 19:40:33.078557  437269 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 19:40:33.078564  437269 command_runner.go:130] > # containers images, in this directory.
	I1014 19:40:33.078572  437269 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1014 19:40:33.078580  437269 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 19:40:33.078585  437269 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1014 19:40:33.078594  437269 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1014 19:40:33.078601  437269 command_runner.go:130] > # imagestore = ""
	I1014 19:40:33.078607  437269 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 19:40:33.078615  437269 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 19:40:33.078620  437269 command_runner.go:130] > # storage_driver = "overlay"
	I1014 19:40:33.078625  437269 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 19:40:33.078633  437269 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 19:40:33.078637  437269 command_runner.go:130] > # storage_option = [
	I1014 19:40:33.078642  437269 command_runner.go:130] > # ]
	I1014 19:40:33.078648  437269 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 19:40:33.078656  437269 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 19:40:33.078660  437269 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 19:40:33.078667  437269 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 19:40:33.078673  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 19:40:33.078690  437269 command_runner.go:130] > # always happen on a node reboot
	I1014 19:40:33.078695  437269 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 19:40:33.078703  437269 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 19:40:33.078709  437269 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 19:40:33.078716  437269 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 19:40:33.078720  437269 command_runner.go:130] > # version_file_persist = ""
	I1014 19:40:33.078729  437269 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 19:40:33.078739  437269 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 19:40:33.078745  437269 command_runner.go:130] > # internal_wipe = true
	I1014 19:40:33.078771  437269 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 19:40:33.078784  437269 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 19:40:33.078790  437269 command_runner.go:130] > # internal_repair = true
	I1014 19:40:33.078798  437269 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 19:40:33.078804  437269 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 19:40:33.078816  437269 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 19:40:33.078823  437269 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 19:40:33.078829  437269 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 19:40:33.078834  437269 command_runner.go:130] > [crio.api]
	I1014 19:40:33.078839  437269 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 19:40:33.078846  437269 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 19:40:33.078851  437269 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 19:40:33.078858  437269 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 19:40:33.078864  437269 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 19:40:33.078871  437269 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 19:40:33.078875  437269 command_runner.go:130] > # stream_port = "0"
	I1014 19:40:33.078881  437269 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 19:40:33.078885  437269 command_runner.go:130] > # stream_enable_tls = false
	I1014 19:40:33.078893  437269 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 19:40:33.078897  437269 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 19:40:33.078904  437269 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 19:40:33.078912  437269 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078916  437269 command_runner.go:130] > # stream_tls_cert = ""
	I1014 19:40:33.078924  437269 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 19:40:33.078931  437269 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1014 19:40:33.078936  437269 command_runner.go:130] > # stream_tls_key = ""
	I1014 19:40:33.078941  437269 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 19:40:33.078949  437269 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 19:40:33.078954  437269 command_runner.go:130] > # automatically pick up the changes.
	I1014 19:40:33.078960  437269 command_runner.go:130] > # stream_tls_ca = ""
	I1014 19:40:33.078977  437269 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078984  437269 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1014 19:40:33.078991  437269 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 19:40:33.078998  437269 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1014 19:40:33.079004  437269 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 19:40:33.079011  437269 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 19:40:33.079015  437269 command_runner.go:130] > [crio.runtime]
	I1014 19:40:33.079021  437269 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 19:40:33.079028  437269 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 19:40:33.079032  437269 command_runner.go:130] > # "nofile=1024:2048"
	I1014 19:40:33.079040  437269 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 19:40:33.079046  437269 command_runner.go:130] > # default_ulimits = [
	I1014 19:40:33.079049  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079054  437269 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 19:40:33.079060  437269 command_runner.go:130] > # no_pivot = false
	I1014 19:40:33.079065  437269 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 19:40:33.079073  437269 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 19:40:33.079078  437269 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 19:40:33.079086  437269 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 19:40:33.079090  437269 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 19:40:33.079099  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079105  437269 command_runner.go:130] > # conmon = ""
	I1014 19:40:33.079109  437269 command_runner.go:130] > # Cgroup setting for conmon
	I1014 19:40:33.079117  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 19:40:33.079123  437269 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 19:40:33.079129  437269 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 19:40:33.079136  437269 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 19:40:33.079142  437269 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 19:40:33.079147  437269 command_runner.go:130] > # conmon_env = [
	I1014 19:40:33.079150  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079155  437269 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 19:40:33.079163  437269 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 19:40:33.079169  437269 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 19:40:33.079175  437269 command_runner.go:130] > # default_env = [
	I1014 19:40:33.079177  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079183  437269 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 19:40:33.079192  437269 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1014 19:40:33.079198  437269 command_runner.go:130] > # selinux = false
	I1014 19:40:33.079204  437269 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 19:40:33.079210  437269 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1014 19:40:33.079219  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079225  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.079231  437269 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1014 19:40:33.079237  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079242  437269 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1014 19:40:33.079250  437269 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 19:40:33.079258  437269 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 19:40:33.079264  437269 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 19:40:33.079273  437269 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 19:40:33.079279  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079284  437269 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 19:40:33.079291  437269 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 19:40:33.079295  437269 command_runner.go:130] > # the cgroup blockio controller.
	I1014 19:40:33.079301  437269 command_runner.go:130] > # blockio_config_file = ""
	I1014 19:40:33.079308  437269 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 19:40:33.079314  437269 command_runner.go:130] > # blockio parameters.
	I1014 19:40:33.079317  437269 command_runner.go:130] > # blockio_reload = false
	I1014 19:40:33.079325  437269 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 19:40:33.079329  437269 command_runner.go:130] > # irqbalance daemon.
	I1014 19:40:33.079336  437269 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 19:40:33.079342  437269 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 19:40:33.079351  437269 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 19:40:33.079360  437269 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 19:40:33.079367  437269 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 19:40:33.079374  437269 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 19:40:33.079380  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079385  437269 command_runner.go:130] > # rdt_config_file = ""
	I1014 19:40:33.079393  437269 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 19:40:33.079396  437269 command_runner.go:130] > # cgroup_manager = "systemd"
	I1014 19:40:33.079402  437269 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 19:40:33.079407  437269 command_runner.go:130] > # separate_pull_cgroup = ""
	I1014 19:40:33.079413  437269 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 19:40:33.079421  437269 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 19:40:33.079427  437269 command_runner.go:130] > # will be added.
	I1014 19:40:33.079430  437269 command_runner.go:130] > # default_capabilities = [
	I1014 19:40:33.079433  437269 command_runner.go:130] > # 	"CHOWN",
	I1014 19:40:33.079439  437269 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 19:40:33.079442  437269 command_runner.go:130] > # 	"FSETID",
	I1014 19:40:33.079445  437269 command_runner.go:130] > # 	"FOWNER",
	I1014 19:40:33.079451  437269 command_runner.go:130] > # 	"SETGID",
	I1014 19:40:33.079466  437269 command_runner.go:130] > # 	"SETUID",
	I1014 19:40:33.079472  437269 command_runner.go:130] > # 	"SETPCAP",
	I1014 19:40:33.079475  437269 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 19:40:33.079480  437269 command_runner.go:130] > # 	"KILL",
	I1014 19:40:33.079484  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079493  437269 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 19:40:33.079501  437269 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 19:40:33.079508  437269 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 19:40:33.079514  437269 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 19:40:33.079522  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079526  437269 command_runner.go:130] > default_sysctls = [
	I1014 19:40:33.079530  437269 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 19:40:33.079536  437269 command_runner.go:130] > ]
	I1014 19:40:33.079540  437269 command_runner.go:130] > # List of devices on the host that a
	I1014 19:40:33.079548  437269 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 19:40:33.079553  437269 command_runner.go:130] > # allowed_devices = [
	I1014 19:40:33.079557  437269 command_runner.go:130] > # 	"/dev/fuse",
	I1014 19:40:33.079563  437269 command_runner.go:130] > # 	"/dev/net/tun",
	I1014 19:40:33.079566  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079574  437269 command_runner.go:130] > # List of additional devices, specified as
	I1014 19:40:33.079581  437269 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 19:40:33.079588  437269 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 19:40:33.079595  437269 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 19:40:33.079601  437269 command_runner.go:130] > # additional_devices = [
	I1014 19:40:33.079604  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079611  437269 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 19:40:33.079615  437269 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 19:40:33.079619  437269 command_runner.go:130] > # 	"/etc/cdi",
	I1014 19:40:33.079625  437269 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 19:40:33.079628  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079633  437269 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 19:40:33.079641  437269 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 19:40:33.079645  437269 command_runner.go:130] > # Defaults to false.
	I1014 19:40:33.079652  437269 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 19:40:33.079659  437269 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 19:40:33.079666  437269 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 19:40:33.079670  437269 command_runner.go:130] > # hooks_dir = [
	I1014 19:40:33.079682  437269 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 19:40:33.079687  437269 command_runner.go:130] > # ]
	I1014 19:40:33.079693  437269 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 19:40:33.079701  437269 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 19:40:33.079706  437269 command_runner.go:130] > # its default mounts from the following two files:
	I1014 19:40:33.079712  437269 command_runner.go:130] > #
	I1014 19:40:33.079718  437269 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 19:40:33.079726  437269 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 19:40:33.079734  437269 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 19:40:33.079737  437269 command_runner.go:130] > #
	I1014 19:40:33.079743  437269 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 19:40:33.079751  437269 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 19:40:33.079780  437269 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 19:40:33.079788  437269 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 19:40:33.079791  437269 command_runner.go:130] > #
	I1014 19:40:33.079797  437269 command_runner.go:130] > # default_mounts_file = ""
	I1014 19:40:33.079804  437269 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 19:40:33.079811  437269 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 19:40:33.079816  437269 command_runner.go:130] > # pids_limit = -1
	I1014 19:40:33.079822  437269 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1014 19:40:33.079830  437269 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 19:40:33.079839  437269 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 19:40:33.079846  437269 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 19:40:33.079852  437269 command_runner.go:130] > # log_size_max = -1
	I1014 19:40:33.079858  437269 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 19:40:33.079864  437269 command_runner.go:130] > # log_to_journald = false
	I1014 19:40:33.079870  437269 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 19:40:33.079878  437269 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 19:40:33.079883  437269 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 19:40:33.079890  437269 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 19:40:33.079895  437269 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 19:40:33.079901  437269 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 19:40:33.079906  437269 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 19:40:33.079912  437269 command_runner.go:130] > # read_only = false
	I1014 19:40:33.079917  437269 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 19:40:33.079926  437269 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 19:40:33.079933  437269 command_runner.go:130] > # live configuration reload.
	I1014 19:40:33.079937  437269 command_runner.go:130] > # log_level = "info"
	I1014 19:40:33.079942  437269 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 19:40:33.079950  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.079953  437269 command_runner.go:130] > # log_filter = ""
	I1014 19:40:33.079959  437269 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079967  437269 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 19:40:33.079970  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.079978  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.079983  437269 command_runner.go:130] > # uid_mappings = ""
	I1014 19:40:33.079989  437269 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 19:40:33.079997  437269 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 19:40:33.080005  437269 command_runner.go:130] > # separated by comma.
	I1014 19:40:33.080014  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080020  437269 command_runner.go:130] > # gid_mappings = ""
	I1014 19:40:33.080026  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 19:40:33.080035  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080043  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080049  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080055  437269 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 19:40:33.080061  437269 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 19:40:33.080069  437269 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 19:40:33.080075  437269 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 19:40:33.080085  437269 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 19:40:33.080090  437269 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 19:40:33.080096  437269 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 19:40:33.080112  437269 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 19:40:33.080120  437269 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 19:40:33.080124  437269 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 19:40:33.080131  437269 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 19:40:33.080138  437269 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 19:40:33.080144  437269 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 19:40:33.080149  437269 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 19:40:33.080155  437269 command_runner.go:130] > # drop_infra_ctr = true
	I1014 19:40:33.080160  437269 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 19:40:33.080168  437269 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 19:40:33.080175  437269 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 19:40:33.080181  437269 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 19:40:33.080188  437269 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 19:40:33.080195  437269 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 19:40:33.080200  437269 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 19:40:33.080207  437269 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 19:40:33.080211  437269 command_runner.go:130] > # shared_cpuset = ""
	I1014 19:40:33.080219  437269 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 19:40:33.080223  437269 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 19:40:33.080230  437269 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 19:40:33.080237  437269 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 19:40:33.080243  437269 command_runner.go:130] > # pinns_path = ""
	I1014 19:40:33.080249  437269 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 19:40:33.080256  437269 command_runner.go:130] > # checkpoint and restore containers or pods (even if CRIU is found in $PATH).
	I1014 19:40:33.080261  437269 command_runner.go:130] > # enable_criu_support = true
	I1014 19:40:33.080268  437269 command_runner.go:130] > # Enable/disable the generation of container and
	I1014 19:40:33.080273  437269 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 19:40:33.080280  437269 command_runner.go:130] > # enable_pod_events = false
	I1014 19:40:33.080285  437269 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 19:40:33.080292  437269 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 19:40:33.080296  437269 command_runner.go:130] > # default_runtime = "crun"
	I1014 19:40:33.080301  437269 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 19:40:33.080310  437269 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1014 19:40:33.080320  437269 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 19:40:33.080325  437269 command_runner.go:130] > # creation as a file is not desired either.
	I1014 19:40:33.080336  437269 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 19:40:33.080342  437269 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 19:40:33.080346  437269 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 19:40:33.080352  437269 command_runner.go:130] > # ]
	I1014 19:40:33.080357  437269 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 19:40:33.080365  437269 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 19:40:33.080373  437269 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 19:40:33.080378  437269 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 19:40:33.080382  437269 command_runner.go:130] > #
	I1014 19:40:33.080387  437269 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 19:40:33.080394  437269 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 19:40:33.080397  437269 command_runner.go:130] > # runtime_type = "oci"
	I1014 19:40:33.080404  437269 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 19:40:33.080408  437269 command_runner.go:130] > # inherit_default_runtime = false
	I1014 19:40:33.080413  437269 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 19:40:33.080419  437269 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 19:40:33.080424  437269 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 19:40:33.080430  437269 command_runner.go:130] > # monitor_env = []
	I1014 19:40:33.080435  437269 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 19:40:33.080440  437269 command_runner.go:130] > # allowed_annotations = []
	I1014 19:40:33.080445  437269 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 19:40:33.080451  437269 command_runner.go:130] > # no_sync_log = false
	I1014 19:40:33.080455  437269 command_runner.go:130] > # default_annotations = {}
	I1014 19:40:33.080461  437269 command_runner.go:130] > # stream_websockets = false
	I1014 19:40:33.080465  437269 command_runner.go:130] > # seccomp_profile = ""
	I1014 19:40:33.080487  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.080494  437269 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 19:40:33.080500  437269 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 19:40:33.080508  437269 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 19:40:33.080514  437269 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 19:40:33.080519  437269 command_runner.go:130] > #   in $PATH.
	I1014 19:40:33.080525  437269 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 19:40:33.080532  437269 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 19:40:33.080538  437269 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 19:40:33.080543  437269 command_runner.go:130] > #   state.
	I1014 19:40:33.080552  437269 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 19:40:33.080560  437269 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1014 19:40:33.080565  437269 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1014 19:40:33.080573  437269 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1014 19:40:33.080578  437269 command_runner.go:130] > #   the values from the default runtime on load time.
	I1014 19:40:33.080586  437269 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 19:40:33.080591  437269 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 19:40:33.080599  437269 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 19:40:33.080605  437269 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 19:40:33.080612  437269 command_runner.go:130] > #   The currently recognized values are:
	I1014 19:40:33.080618  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 19:40:33.080627  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 19:40:33.080636  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 19:40:33.080641  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 19:40:33.080651  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 19:40:33.080660  437269 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 19:40:33.080669  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 19:40:33.080680  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 19:40:33.080687  437269 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 19:40:33.080693  437269 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1014 19:40:33.080702  437269 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1014 19:40:33.080710  437269 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1014 19:40:33.080715  437269 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1014 19:40:33.080724  437269 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1014 19:40:33.080732  437269 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1014 19:40:33.080738  437269 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1014 19:40:33.080747  437269 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 19:40:33.080751  437269 command_runner.go:130] > #   deprecated option "conmon".
	I1014 19:40:33.080773  437269 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 19:40:33.080783  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 19:40:33.080796  437269 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 19:40:33.080803  437269 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 19:40:33.080810  437269 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1014 19:40:33.080817  437269 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 19:40:33.080824  437269 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1014 19:40:33.080830  437269 command_runner.go:130] > #   conmon-rs by using:
	I1014 19:40:33.080837  437269 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1014 19:40:33.080847  437269 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1014 19:40:33.080857  437269 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1014 19:40:33.080865  437269 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 19:40:33.080872  437269 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 19:40:33.080879  437269 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1014 19:40:33.080888  437269 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1014 19:40:33.080894  437269 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1014 19:40:33.080904  437269 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1014 19:40:33.080915  437269 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1014 19:40:33.080921  437269 command_runner.go:130] > #   when the machine crashes.
	I1014 19:40:33.080929  437269 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1014 19:40:33.080939  437269 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1014 19:40:33.080949  437269 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1014 19:40:33.080955  437269 command_runner.go:130] > #   seccomp profile for the runtime.
	I1014 19:40:33.080961  437269 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1014 19:40:33.080970  437269 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1014 19:40:33.080975  437269 command_runner.go:130] > #
	I1014 19:40:33.080980  437269 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 19:40:33.080985  437269 command_runner.go:130] > #
	I1014 19:40:33.080991  437269 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 19:40:33.080998  437269 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1014 19:40:33.081002  437269 command_runner.go:130] > #
	I1014 19:40:33.081007  437269 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 19:40:33.081015  437269 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 19:40:33.081020  437269 command_runner.go:130] > #
	I1014 19:40:33.081026  437269 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 19:40:33.081032  437269 command_runner.go:130] > # feature.
	I1014 19:40:33.081035  437269 command_runner.go:130] > #
	I1014 19:40:33.081042  437269 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1014 19:40:33.081048  437269 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 19:40:33.081057  437269 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 19:40:33.081062  437269 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 19:40:33.081070  437269 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1014 19:40:33.081073  437269 command_runner.go:130] > #
	I1014 19:40:33.081079  437269 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 19:40:33.081087  437269 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 19:40:33.081090  437269 command_runner.go:130] > #
	I1014 19:40:33.081096  437269 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1014 19:40:33.081103  437269 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 19:40:33.081106  437269 command_runner.go:130] > #
	I1014 19:40:33.081112  437269 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 19:40:33.081119  437269 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 19:40:33.081122  437269 command_runner.go:130] > # limitation.
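For reference, a minimal sketch of a pod that opts into the seccomp notifier described above, assuming a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction" (the pod name and image choice here are illustrative, not taken from this run):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo          # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never                 # required, otherwise the kubelet restarts the container
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.10.1
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault           # a seccomp profile must be in effect for blocked syscalls to notify
	EOF

With "stop", CRI-O terminates the workload about 5 seconds after the last blocked syscall, as described above.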
	I1014 19:40:33.081129  437269 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1014 19:40:33.081138  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1014 19:40:33.081143  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081147  437269 command_runner.go:130] > runtime_root = "/run/crun"
	I1014 19:40:33.081151  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081157  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081161  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081167  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081171  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081177  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081181  437269 command_runner.go:130] > allowed_annotations = [
	I1014 19:40:33.081187  437269 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1014 19:40:33.081190  437269 command_runner.go:130] > ]
	I1014 19:40:33.081197  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081201  437269 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 19:40:33.081208  437269 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1014 19:40:33.081212  437269 command_runner.go:130] > runtime_type = ""
	I1014 19:40:33.081218  437269 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 19:40:33.081222  437269 command_runner.go:130] > inherit_default_runtime = false
	I1014 19:40:33.081229  437269 command_runner.go:130] > runtime_config_path = ""
	I1014 19:40:33.081234  437269 command_runner.go:130] > container_min_memory = ""
	I1014 19:40:33.081241  437269 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 19:40:33.081245  437269 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 19:40:33.081251  437269 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 19:40:33.081256  437269 command_runner.go:130] > privileged_without_host_devices = false
	I1014 19:40:33.081264  437269 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 19:40:33.081271  437269 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 19:40:33.081277  437269 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 19:40:33.081286  437269 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1014 19:40:33.081298  437269 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1014 19:40:33.081309  437269 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1014 19:40:33.081318  437269 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1014 19:40:33.081324  437269 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 19:40:33.081335  437269 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 19:40:33.081345  437269 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 19:40:33.081353  437269 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 19:40:33.081359  437269 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 19:40:33.081365  437269 command_runner.go:130] > # Example:
	I1014 19:40:33.081369  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 19:40:33.081375  437269 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 19:40:33.081380  437269 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 19:40:33.081389  437269 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 19:40:33.081395  437269 command_runner.go:130] > # cpuset = "0-1"
	I1014 19:40:33.081399  437269 command_runner.go:130] > # cpushares = "5"
	I1014 19:40:33.081405  437269 command_runner.go:130] > # cpuquota = "1000"
	I1014 19:40:33.081408  437269 command_runner.go:130] > # cpuperiod = "100000"
	I1014 19:40:33.081412  437269 command_runner.go:130] > # cpulimit = "35"
	I1014 19:40:33.081417  437269 command_runner.go:130] > # Where:
	I1014 19:40:33.081421  437269 command_runner.go:130] > # The workload name is workload-type.
	I1014 19:40:33.081430  437269 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 19:40:33.081438  437269 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 19:40:33.081443  437269 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 19:40:33.081453  437269 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 19:40:33.081470  437269 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
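A pod opts into the workload above via the activation annotation, with optional per-container overrides; a hedged sketch following the annotation forms described above (the names and the "200" value are illustrative):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                             # hypothetical name
	  annotations:
	    io.crio/workload: ""                          # activation annotation; the value is ignored
	    io.crio.workload-type.cpushares/demo: "200"   # $annotation_prefix.$resource/$ctrName override
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.10.1
	EOF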
	I1014 19:40:33.081477  437269 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 19:40:33.081484  437269 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 19:40:33.081490  437269 command_runner.go:130] > # Default value is set to true
	I1014 19:40:33.081494  437269 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 19:40:33.081499  437269 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1014 19:40:33.081505  437269 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1014 19:40:33.081510  437269 command_runner.go:130] > # Default value is set to 'false'
	I1014 19:40:33.081516  437269 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 19:40:33.081522  437269 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1014 19:40:33.081531  437269 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1014 19:40:33.081537  437269 command_runner.go:130] > # timezone = ""
	I1014 19:40:33.081543  437269 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 19:40:33.081549  437269 command_runner.go:130] > #
	I1014 19:40:33.081555  437269 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 19:40:33.081563  437269 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1014 19:40:33.081567  437269 command_runner.go:130] > [crio.image]
	I1014 19:40:33.081575  437269 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 19:40:33.081579  437269 command_runner.go:130] > # default_transport = "docker://"
	I1014 19:40:33.081585  437269 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 19:40:33.081593  437269 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081597  437269 command_runner.go:130] > # global_auth_file = ""
	I1014 19:40:33.081604  437269 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 19:40:33.081609  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081616  437269 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1014 19:40:33.081622  437269 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 19:40:33.081630  437269 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 19:40:33.081634  437269 command_runner.go:130] > # This option supports live configuration reload.
	I1014 19:40:33.081639  437269 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 19:40:33.081645  437269 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 19:40:33.081653  437269 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1014 19:40:33.081658  437269 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1014 19:40:33.081666  437269 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 19:40:33.081671  437269 command_runner.go:130] > # pause_command = "/pause"
	I1014 19:40:33.081682  437269 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 19:40:33.081690  437269 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 19:40:33.081695  437269 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 19:40:33.081703  437269 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 19:40:33.081709  437269 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 19:40:33.081717  437269 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 19:40:33.081723  437269 command_runner.go:130] > # pinned_images = [
	I1014 19:40:33.081725  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081731  437269 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 19:40:33.081739  437269 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 19:40:33.081745  437269 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 19:40:33.081762  437269 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 19:40:33.081774  437269 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 19:40:33.081781  437269 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1014 19:40:33.081789  437269 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 19:40:33.081795  437269 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 19:40:33.081804  437269 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 19:40:33.081813  437269 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1014 19:40:33.081822  437269 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 19:40:33.081833  437269 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1014 19:40:33.081841  437269 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 19:40:33.081847  437269 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 19:40:33.081853  437269 command_runner.go:130] > # changing them here.
	I1014 19:40:33.081859  437269 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1014 19:40:33.081865  437269 command_runner.go:130] > # insecure_registries = [
	I1014 19:40:33.081868  437269 command_runner.go:130] > # ]
	I1014 19:40:33.081877  437269 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 19:40:33.081887  437269 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1014 19:40:33.081893  437269 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 19:40:33.081898  437269 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 19:40:33.081904  437269 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 19:40:33.081910  437269 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1014 19:40:33.081918  437269 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1014 19:40:33.081925  437269 command_runner.go:130] > # auto_reload_registries = false
	I1014 19:40:33.081932  437269 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1014 19:40:33.081940  437269 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1014 19:40:33.081947  437269 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1014 19:40:33.081951  437269 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1014 19:40:33.081958  437269 command_runner.go:130] > # The mode of short name resolution.
	I1014 19:40:33.081966  437269 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1014 19:40:33.081977  437269 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1014 19:40:33.081984  437269 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1014 19:40:33.081989  437269 command_runner.go:130] > # short_name_mode = "enforcing"
	I1014 19:40:33.081997  437269 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1014 19:40:33.082002  437269 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1014 19:40:33.082009  437269 command_runner.go:130] > # oci_artifact_mount_support = true
	I1014 19:40:33.082015  437269 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 19:40:33.082021  437269 command_runner.go:130] > # CNI plugins.
	I1014 19:40:33.082025  437269 command_runner.go:130] > [crio.network]
	I1014 19:40:33.082033  437269 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 19:40:33.082040  437269 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1014 19:40:33.082044  437269 command_runner.go:130] > # cni_default_network = ""
	I1014 19:40:33.082052  437269 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 19:40:33.082056  437269 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 19:40:33.082064  437269 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 19:40:33.082068  437269 command_runner.go:130] > # plugin_dirs = [
	I1014 19:40:33.082071  437269 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 19:40:33.082074  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082078  437269 command_runner.go:130] > # List of included pod metrics.
	I1014 19:40:33.082082  437269 command_runner.go:130] > # included_pod_metrics = [
	I1014 19:40:33.082085  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082089  437269 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1014 19:40:33.082092  437269 command_runner.go:130] > [crio.metrics]
	I1014 19:40:33.082097  437269 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 19:40:33.082100  437269 command_runner.go:130] > # enable_metrics = false
	I1014 19:40:33.082104  437269 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 19:40:33.082108  437269 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 19:40:33.082114  437269 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1014 19:40:33.082119  437269 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 19:40:33.082124  437269 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 19:40:33.082128  437269 command_runner.go:130] > # metrics_collectors = [
	I1014 19:40:33.082131  437269 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 19:40:33.082135  437269 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 19:40:33.082139  437269 command_runner.go:130] > # 	"containers_oom_total",
	I1014 19:40:33.082142  437269 command_runner.go:130] > # 	"processes_defunct",
	I1014 19:40:33.082146  437269 command_runner.go:130] > # 	"operations_total",
	I1014 19:40:33.082150  437269 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 19:40:33.082154  437269 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 19:40:33.082157  437269 command_runner.go:130] > # 	"operations_errors_total",
	I1014 19:40:33.082162  437269 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 19:40:33.082169  437269 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 19:40:33.082173  437269 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 19:40:33.082178  437269 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 19:40:33.082182  437269 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 19:40:33.082188  437269 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 19:40:33.082193  437269 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 19:40:33.082199  437269 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 19:40:33.082203  437269 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1014 19:40:33.082208  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082214  437269 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1014 19:40:33.082219  437269 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1014 19:40:33.082224  437269 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 19:40:33.082227  437269 command_runner.go:130] > # metrics_port = 9090
	I1014 19:40:33.082234  437269 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 19:40:33.082238  437269 command_runner.go:130] > # metrics_socket = ""
	I1014 19:40:33.082245  437269 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 19:40:33.082250  437269 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 19:40:33.082258  437269 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 19:40:33.082263  437269 command_runner.go:130] > # certificate on any modification event.
	I1014 19:40:33.082269  437269 command_runner.go:130] > # metrics_cert = ""
	I1014 19:40:33.082274  437269 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 19:40:33.082280  437269 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 19:40:33.082284  437269 command_runner.go:130] > # metrics_key = ""
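With enable_metrics = true (and a CRI-O restart), the endpoint at the defaults above can be scraped directly; a sketch assuming the default host and port:

	curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'   # "operations" collectors surface under the crio_ prefix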
	I1014 19:40:33.082292  437269 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 19:40:33.082295  437269 command_runner.go:130] > [crio.tracing]
	I1014 19:40:33.082300  437269 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 19:40:33.082306  437269 command_runner.go:130] > # enable_tracing = false
	I1014 19:40:33.082311  437269 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1014 19:40:33.082317  437269 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1014 19:40:33.082324  437269 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 19:40:33.082330  437269 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1014 19:40:33.082334  437269 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 19:40:33.082340  437269 command_runner.go:130] > [crio.nri]
	I1014 19:40:33.082345  437269 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 19:40:33.082350  437269 command_runner.go:130] > # enable_nri = true
	I1014 19:40:33.082354  437269 command_runner.go:130] > # NRI socket to listen on.
	I1014 19:40:33.082361  437269 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 19:40:33.082365  437269 command_runner.go:130] > # NRI plugin directory to use.
	I1014 19:40:33.082372  437269 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 19:40:33.082376  437269 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 19:40:33.082383  437269 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 19:40:33.082388  437269 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 19:40:33.082423  437269 command_runner.go:130] > # nri_disable_connections = false
	I1014 19:40:33.082431  437269 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 19:40:33.082435  437269 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 19:40:33.082440  437269 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 19:40:33.082444  437269 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 19:40:33.082451  437269 command_runner.go:130] > # NRI default validator configuration.
	I1014 19:40:33.082457  437269 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1014 19:40:33.082466  437269 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1014 19:40:33.082472  437269 command_runner.go:130] > # can be restricted/rejected:
	I1014 19:40:33.082476  437269 command_runner.go:130] > # - OCI hook injection
	I1014 19:40:33.082483  437269 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1014 19:40:33.082487  437269 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1014 19:40:33.082494  437269 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1014 19:40:33.082498  437269 command_runner.go:130] > # - adjustment of linux namespaces
	I1014 19:40:33.082506  437269 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1014 19:40:33.082514  437269 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1014 19:40:33.082519  437269 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1014 19:40:33.082524  437269 command_runner.go:130] > #
	I1014 19:40:33.082528  437269 command_runner.go:130] > # [crio.nri.default_validator]
	I1014 19:40:33.082535  437269 command_runner.go:130] > # nri_enable_default_validator = false
	I1014 19:40:33.082539  437269 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1014 19:40:33.082546  437269 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1014 19:40:33.082551  437269 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1014 19:40:33.082559  437269 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1014 19:40:33.082564  437269 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1014 19:40:33.082570  437269 command_runner.go:130] > # nri_validator_required_plugins = [
	I1014 19:40:33.082573  437269 command_runner.go:130] > # ]
	I1014 19:40:33.082582  437269 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1014 19:40:33.082587  437269 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 19:40:33.082593  437269 command_runner.go:130] > [crio.stats]
	I1014 19:40:33.082598  437269 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 19:40:33.082608  437269 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 19:40:33.082614  437269 command_runner.go:130] > # stats_collection_period = 0
	I1014 19:40:33.082619  437269 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1014 19:40:33.082628  437269 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1014 19:40:33.082631  437269 command_runner.go:130] > # collection_period = 0
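The dump above is the config file as echoed during provisioning; to see what CRI-O itself resolves, something like the following can be used (subcommand spelling varies by CRI-O version, so treat this as a sketch):

	sudo crio config | grep -A 3 '\[crio.runtime.runtimes.crun\]'   # print the config CRI-O would generate
	sudo crio status config                                         # query a running instance over its socket ("crio-status config" on older releases)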
	I1014 19:40:33.082741  437269 cni.go:84] Creating CNI manager for ""
	I1014 19:40:33.082769  437269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:40:33.082789  437269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:40:33.082811  437269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:40:33.082940  437269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
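	Before kubeadm consumes a rendered config like the one above, it can be sanity-checked on the node; a sketch assuming kubeadm >= 1.26 and the /var/tmp/minikube target path used in the transfer below:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new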
	
	I1014 19:40:33.083002  437269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:40:33.091321  437269 command_runner.go:130] > kubeadm
	I1014 19:40:33.091339  437269 command_runner.go:130] > kubectl
	I1014 19:40:33.091351  437269 command_runner.go:130] > kubelet
	I1014 19:40:33.091376  437269 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:40:33.091429  437269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:40:33.099086  437269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:40:33.111962  437269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:40:33.125422  437269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1014 19:40:33.138383  437269 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:40:33.142436  437269 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1014 19:40:33.142515  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.229714  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:33.242948  437269 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:40:33.242967  437269 certs.go:195] generating shared ca certs ...
	I1014 19:40:33.242983  437269 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.243111  437269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:40:33.243147  437269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:40:33.243157  437269 certs.go:257] generating profile certs ...
	I1014 19:40:33.243244  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:40:33.243295  437269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:40:33.243331  437269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:40:33.243342  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 19:40:33.243354  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 19:40:33.243366  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 19:40:33.243378  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 19:40:33.243389  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 19:40:33.243402  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 19:40:33.243414  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 19:40:33.243426  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 19:40:33.243468  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:40:33.243499  437269 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:40:33.243509  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:40:33.243528  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:40:33.243550  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:40:33.243570  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:40:33.243605  437269 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:40:33.243631  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.243646  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.243657  437269 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.244241  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:40:33.262628  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:40:33.280949  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:40:33.299645  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:40:33.318581  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:40:33.336772  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:40:33.354893  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:40:33.372224  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:40:33.389816  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:40:33.407785  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:40:33.425006  437269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:40:33.442414  437269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
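The scp lines above stage the host-side CA, apiserver, and proxy-client certificates into fixed paths under /var/lib/minikube/certs on the node. A minimal sketch of the same src -> dst staging, assuming an `scp` binary on PATH and purely illustrative paths (this is not minikube's internal ssh_runner API):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // copyCerts pushes each local cert to its fixed destination on the node,
    // mirroring the src --> dst pairs in the log above. Paths are hypothetical.
    func copyCerts(host string) error {
    	assets := map[string]string{
    		".minikube/ca.crt":                   "/var/lib/minikube/certs/ca.crt",
    		".minikube/ca.key":                   "/var/lib/minikube/certs/ca.key",
    		"profiles/example/apiserver.crt":     "/var/lib/minikube/certs/apiserver.crt",
    		"profiles/example/apiserver.key":     "/var/lib/minikube/certs/apiserver.key",
    	}
    	for src, dst := range assets {
    		// minikube does the equivalent over its own SSH session.
    		cmd := exec.Command("scp", src, fmt.Sprintf("%s:%s", host, dst))
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("scp %s -> %s: %v: %s", src, dst, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := copyCerts("docker@192.168.49.2"); err != nil {
    		log.Fatal(err)
    	}
    }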
	I1014 19:40:33.455418  437269 ssh_runner.go:195] Run: openssl version
	I1014 19:40:33.461786  437269 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1014 19:40:33.461878  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:40:33.470707  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474930  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.474991  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.475040  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:40:33.510084  437269 command_runner.go:130] > 51391683
	I1014 19:40:33.510386  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:40:33.519147  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:40:33.528110  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532126  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532195  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.532237  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:40:33.566452  437269 command_runner.go:130] > 3ec20f2e
	I1014 19:40:33.566529  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 19:40:33.575059  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:40:33.583998  437269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.587961  437269 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588033  437269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.588081  437269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:40:33.622398  437269 command_runner.go:130] > b5213941
	I1014 19:40:33.622796  437269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
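The `openssl x509 -hash` calls above print the subject-name hash (e.g. 51391683, b5213941) that OpenSSL uses to look up CAs in /etc/ssl/certs; the `ln -fs ... /etc/ssl/certs/<hash>.0` commands make each CA discoverable by that lookup. A small sketch of the pattern, assuming `openssl` on PATH and write access to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCA computes the OpenSSL subject hash of a PEM certificate and creates
    // the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's CA lookup expects.
    func linkCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Equivalent of ln -fs: drop any stale link, then point it at the cert.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }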
	I1014 19:40:33.631371  437269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635295  437269 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:40:33.635320  437269 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 19:40:33.635326  437269 command_runner.go:130] > Device: 8,1	Inode: 573968      Links: 1
	I1014 19:40:33.635332  437269 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 19:40:33.635341  437269 command_runner.go:130] > Access: 2025-10-14 19:36:24.950222095 +0000
	I1014 19:40:33.635346  437269 command_runner.go:130] > Modify: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635350  437269 command_runner.go:130] > Change: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635355  437269 command_runner.go:130] >  Birth: 2025-10-14 19:32:20.041123235 +0000
	I1014 19:40:33.635409  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:40:33.669731  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.670080  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:40:33.705048  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.705140  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:40:33.739547  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.739632  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:40:33.774590  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.774998  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:40:33.810800  437269 command_runner.go:130] > Certificate will not expire
	I1014 19:40:33.810892  437269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 19:40:33.846191  437269 command_runner.go:130] > Certificate will not expire
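Each `openssl x509 -checkend 86400` run above verifies that a control-plane certificate remains valid for at least another 24 hours ("Certificate will not expire"). The same check can be done without shelling out, using only the Go standard library; a sketch with an illustrative path:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }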
	I1014 19:40:33.846525  437269 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:40:33.846626  437269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:40:33.846701  437269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:40:33.876026  437269 cri.go:89] found id: ""
	I1014 19:40:33.876095  437269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:40:33.883772  437269 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 19:40:33.883800  437269 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 19:40:33.883806  437269 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 19:40:33.884383  437269 kubeadm.go:416] found existing configuration files, will attempt cluster restart
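The `sudo ls` probe above is how minikube decides between a clean `kubeadm init` and a restart: if the kubelet config and etcd data directory already exist on disk, the control plane is restarted in place. A sketch of that decision, using the same three paths:

    package main

    import (
    	"fmt"
    	"os"
    )

    // hasExistingCluster mirrors the ls probe above: if the kubelet config,
    // kubeadm flags, and etcd data directory are all present, attempt a
    // cluster restart rather than a fresh init.
    func hasExistingCluster() bool {
    	paths := []string{
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/minikube/etcd",
    	}
    	for _, p := range paths {
    		if _, err := os.Stat(p); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	if hasExistingCluster() {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	} else {
    		fmt.Println("no existing state, initializing a new control plane")
    	}
    }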
	I1014 19:40:33.884404  437269 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:40:33.884457  437269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:40:33.892144  437269 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:40:33.892232  437269 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-744288" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.892262  437269 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "functional-744288" cluster setting kubeconfig missing "functional-744288" context setting]
	I1014 19:40:33.892554  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
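The "needs updating (will repair)" step above amounts to adding the missing cluster and context entries for the profile back into the kubeconfig. A hedged sketch with client-go's clientcmd package (it assumes a matching AuthInfo entry already exists, which minikube also writes; paths are illustrative):

    package main

    import (
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig inserts the cluster and context entries for a profile
    // if they are missing, then makes that context current.
    func repairKubeconfig(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &api.Cluster{Server: server}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := repairKubeconfig("/home/user/.kube/config", "functional-744288", "https://192.168.49.2:8441")
    	if err != nil {
    		log.Fatal(err)
    	}
    }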
	I1014 19:40:33.893171  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.893322  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.893776  437269 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 19:40:33.893798  437269 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 19:40:33.893803  437269 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 19:40:33.893807  437269 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 19:40:33.893810  437269 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 19:40:33.893821  437269 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
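The kapi.go dump above is a client-go rest.Config built from the profile's client certificate, key, and cluster CA. A minimal equivalent that builds the same kind of certificate-based client and makes one call against it (host and file paths are illustrative, not taken from a real profile):

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Certificate-based client config, mirroring the fields in the dump above.
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8441",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/profiles/functional-744288/client.crt",
    			KeyFile:  "/path/to/profiles/functional-744288/client.key",
    			CAFile:   "/path/to/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("apiserver version:", v.GitVersion)
    }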
	I1014 19:40:33.894261  437269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:40:33.902475  437269 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 19:40:33.902513  437269 kubeadm.go:601] duration metric: took 18.102158ms to restartPrimaryControlPlane
	I1014 19:40:33.902527  437269 kubeadm.go:402] duration metric: took 56.015342ms to StartCluster
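The `diff -u kubeadm.yaml kubeadm.yaml.new` run above exits 0 when the freshly rendered config matches what is already on disk, which is how the code concludes "does not require reconfiguration". A sketch of that exit-code-based check, assuming a `diff` binary on PATH:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // needsReconfig compares the current and freshly rendered kubeadm configs.
    // diff exits 0 when identical (reuse the control plane as-is) and 1 when
    // the files differ (reconfigure); any other failure is a real error.
    func needsReconfig(cur, next string) (bool, error) {
    	err := exec.Command("diff", "-u", cur, next).Run()
    	if err == nil {
    		return false, nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("needs reconfiguration:", diff)
    }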
	I1014 19:40:33.902549  437269 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.902670  437269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.903326  437269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:40:33.903559  437269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:40:33.903636  437269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 19:40:33.903763  437269 addons.go:69] Setting storage-provisioner=true in profile "functional-744288"
	I1014 19:40:33.903782  437269 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:40:33.903793  437269 addons.go:69] Setting default-storageclass=true in profile "functional-744288"
	I1014 19:40:33.903791  437269 addons.go:238] Setting addon storage-provisioner=true in "functional-744288"
	I1014 19:40:33.903828  437269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-744288"
	I1014 19:40:33.903863  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.904105  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.904258  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.906507  437269 out.go:179] * Verifying Kubernetes components...
	I1014 19:40:33.907562  437269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:40:33.925699  437269 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:40:33.925934  437269 kapi.go:59] client config for functional-744288: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 19:40:33.926358  437269 addons.go:238] Setting addon default-storageclass=true in "functional-744288"
	I1014 19:40:33.926409  437269 host.go:66] Checking if "functional-744288" exists ...
	I1014 19:40:33.926937  437269 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:40:33.928366  437269 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:40:33.930195  437269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:33.930216  437269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:40:33.930272  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.952215  437269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:33.952244  437269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:40:33.952310  437269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:40:33.956857  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:40:33.971706  437269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
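The addon flow above stages each manifest into /etc/kubernetes/addons over SSH, then applies it with the pinned kubectl binary under the node-local kubeconfig. A local sketch of that write-then-apply step, assuming `kubectl` on PATH (the StorageClass body is illustrative, not the shipped addon manifest):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // applyAddon writes a manifest into the addons directory and applies it
    // with kubectl, pointing KUBECONFIG at the node-local admin config, as the
    // log above does over SSH.
    func applyAddon(path string, manifest []byte) error {
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	cmd := exec.Command("kubectl", "apply", "-f", path)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	yaml := []byte("apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: standard\nprovisioner: k8s.io/minikube-hostpath\n")
    	if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml", yaml); err != nil {
    		log.Fatal(err)
    	}
    }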
	I1014 19:40:34.006948  437269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:40:34.021044  437269 node_ready.go:35] waiting up to 6m0s for node "functional-744288" to be "Ready" ...
	I1014 19:40:34.021181  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.021246  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.021571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
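The repeated GET /api/v1/nodes/functional-744288 requests that follow are the "waiting up to 6m0s for node ... to be Ready" poll: fetch the node roughly every 500ms and inspect its Ready condition, tolerating transient errors while the apiserver restarts. A client-go sketch of that loop (kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True or the
    // timeout elapses; fetch errors (e.g. connection refused) are retried.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		} // on error: fall through and retry
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := waitNodeReady(cs, "functional-744288", 6*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    }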
	I1014 19:40:34.069169  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.082461  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.132558  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.132646  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.132686  437269 retry.go:31] will retry after 329.296623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.141809  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.144515  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.144547  437269 retry.go:31] will retry after 261.501781ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
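Each failed apply above is handed to retry.go, which reruns the command after a randomized, growing delay (329ms, 261ms, ... climbing to several seconds across the attempts below). A stdlib sketch of that jittered-backoff retry pattern, not minikube's actual retry implementation:

    package main

    import (
    	"fmt"
    	"log"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping an exponentially growing,
    // jittered delay between tries, like the "will retry after ..." lines here.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		log.Printf("will retry after %v: %v", d, err)
    		time.Sleep(d)
    	}
    	return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("connection refused") // simulate apiserver still down
    		}
    		return nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("applied after", calls, "attempts")
    }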
	I1014 19:40:34.407171  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.461386  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.461450  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.461492  437269 retry.go:31] will retry after 293.495478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.462464  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:34.513733  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.516544  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.516582  437269 retry.go:31] will retry after 480.429339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.521783  437269 type.go:168] "Request Body" body=""
	I1014 19:40:34.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:34.522176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:34.755667  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:34.810676  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:34.810724  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.810744  437269 retry.go:31] will retry after 614.479011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:34.998090  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.022038  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.022373  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.049799  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.052676  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.052709  437269 retry.go:31] will retry after 432.01436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.426352  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:35.482403  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.482455  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.482485  437269 retry.go:31] will retry after 1.057612851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.485602  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:35.522076  437269 type.go:168] "Request Body" body=""
	I1014 19:40:35.522160  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:35.522499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:35.537729  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:35.540612  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:35.540651  437269 retry.go:31] will retry after 1.151923723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.021224  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.021306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.021677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:36.021751  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:36.521540  437269 type.go:168] "Request Body" body=""
	I1014 19:40:36.521648  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:36.522005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:36.541250  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:36.596277  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.596343  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.596366  437269 retry.go:31] will retry after 858.341252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.693590  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:36.746070  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:36.749114  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:36.749145  437269 retry.go:31] will retry after 1.225575657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.021547  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.021641  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.022054  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.455821  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:37.511587  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:37.511647  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.511676  437269 retry.go:31] will retry after 1.002490371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:37.521830  437269 type.go:168] "Request Body" body=""
	I1014 19:40:37.521912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:37.522269  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:37.974939  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:38.021626  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.021748  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:38.022184  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:38.027734  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.030470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.030507  437269 retry.go:31] will retry after 1.025461199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.515193  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:38.521814  437269 type.go:168] "Request Body" body=""
	I1014 19:40:38.521914  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:38.522290  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:38.567735  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:38.570434  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:38.570473  437269 retry.go:31] will retry after 1.83061983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.022158  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.022656  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:39.056879  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:39.109896  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:39.112847  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.112884  437269 retry.go:31] will retry after 3.104822489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:39.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:40:39.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:39.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:40.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.021785  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.022244  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:40.022320  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:40.401833  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:40.453343  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:40.456347  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.456387  437269 retry.go:31] will retry after 3.646877865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:40.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:40.521728  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:40.522111  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.021801  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.022239  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:41.521918  437269 type.go:168] "Request Body" body=""
	I1014 19:40:41.522016  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:41.522380  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:42.022132  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.022218  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.022586  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:42.022649  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:42.217895  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:42.273119  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:42.273178  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.273199  437269 retry.go:31] will retry after 5.13792128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:42.521564  437269 type.go:168] "Request Body" body=""
	I1014 19:40:42.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:42.522122  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.022026  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.022112  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:43.521291  437269 type.go:168] "Request Body" body=""
	I1014 19:40:43.521385  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:43.521849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.021813  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.021907  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.022272  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:44.103502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:44.156724  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:44.159470  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.159502  437269 retry.go:31] will retry after 6.372961743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:44.522197  437269 type.go:168] "Request Body" body=""
	I1014 19:40:44.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:44.522799  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:44.522878  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
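	
	The round_trippers lines surrounding this warning show the readiness poll behind it: minikube GETs /api/v1/nodes/functional-744288 roughly every 500ms and, because the TCP connection is refused outright, each response is logged with empty status/headers and milliseconds=0. A minimal client-go sketch of the same poll, assuming the kubeconfig path taken from the log (a sketch only, not minikube's actual node_ready code):
	
	```go
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-744288", metav1.GetOptions{})
			if err != nil {
				// With the apiserver down this is the same "connection refused"
				// seen throughout the log; keep polling on the same cadence.
				fmt.Println("will retry:", err)
			} else if nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	```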
	I1014 19:40:45.021683  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.021776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.022120  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:45.521709  437269 type.go:168] "Request Body" body=""
	I1014 19:40:45.521833  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:45.522209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.021967  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.022064  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.022441  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:46.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:46.522181  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:46.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:47.022210  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.022296  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.022645  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:47.022716  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:47.412207  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:47.466705  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:47.466772  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.466800  437269 retry.go:31] will retry after 6.31356698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:47.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:47.522061  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:47.522426  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.022131  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.022208  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.022593  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:48.522267  437269 type.go:168] "Request Body" body=""
	I1014 19:40:48.522351  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:48.522727  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.021317  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.021410  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.021831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:49.521375  437269 type.go:168] "Request Body" body=""
	I1014 19:40:49.521474  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:49.521884  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:49.521959  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:50.021803  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.021896  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.022319  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.521972  437269 type.go:168] "Request Body" body=""
	I1014 19:40:50.522068  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:50.522461  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:50.533648  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:50.590568  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:50.590621  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:50.590649  437269 retry.go:31] will retry after 8.10133009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
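	
	The retry delays logged so far (5.13s, 6.37s, 6.31s, 8.10s) grow over time but are not round numbers and not strictly monotonic, which is the signature of a growing backoff with random jitter. A minimal sketch of that pattern, using hypothetical names and constants rather than minikube's real retry.go:
	
	```go
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// retry runs op until it succeeds or maxAttempts is reached, sleeping a
	// growing, jittered delay between attempts. Names and constants here are
	// illustrative assumptions, not minikube's implementation.
	func retry(maxAttempts int, op func() error) error {
		delay := 5 * time.Second
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Up to 50% jitter keeps concurrent retries from synchronizing,
			// which is why consecutive logged delays can even shrink slightly.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay each attempt
		}
		return err
	}
	
	func main() {
		attempt := 0
		_ = retry(4, func() error {
			attempt++
			return fmt.Errorf("apply failed (attempt %d)", attempt)
		})
	}
	```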
	I1014 19:40:51.022238  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.022324  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.022671  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:51.521259  437269 type.go:168] "Request Body" body=""
	I1014 19:40:51.521354  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:51.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:52.021339  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.021436  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.021838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:52.021911  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:52.521431  437269 type.go:168] "Request Body" body=""
	I1014 19:40:52.521523  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:52.521914  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.021515  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.021632  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.022015  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:40:53.521689  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:53.522061  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:53.781554  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:40:53.838039  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:53.838101  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:53.838128  437269 retry.go:31] will retry after 9.837531091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:54.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.021771  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.022166  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:54.022235  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:54.521778  437269 type.go:168] "Request Body" body=""
	I1014 19:40:54.521864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:54.522222  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:55.022074  437269 type.go:168] "Request Body" body=""
	I1014 19:40:55.022163  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:55.022522  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:55.522140  437269 type.go:168] "Request Body" body=""
	I1014 19:40:55.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:55.522653  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:56.021265  437269 type.go:168] "Request Body" body=""
	I1014 19:40:56.021344  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:56.021726  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:56.521342  437269 type.go:168] "Request Body" body=""
	I1014 19:40:56.521439  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:56.521872  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:56.521945  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:57.021424  437269 type.go:168] "Request Body" body=""
	I1014 19:40:57.021552  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:57.021974  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:57.521651  437269 type.go:168] "Request Body" body=""
	I1014 19:40:57.521797  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:57.522216  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:58.021903  437269 type.go:168] "Request Body" body=""
	I1014 19:40:58.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:58.022398  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:58.522085  437269 type.go:168] "Request Body" body=""
	I1014 19:40:58.522169  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:58.522556  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:40:58.522630  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:40:58.692921  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:40:58.746193  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:40:58.749262  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:58.749295  437269 retry.go:31] will retry after 17.735335575s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:40:59.021769  437269 type.go:168] "Request Body" body=""
	I1014 19:40:59.021862  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:59.022229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:40:59.521888  437269 type.go:168] "Request Body" body=""
	I1014 19:40:59.522001  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:40:59.522349  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:00.021702  437269 type.go:168] "Request Body" body=""
	I1014 19:41:00.021801  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:00.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:00.522173  437269 type.go:168] "Request Body" body=""
	I1014 19:41:00.522273  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:00.522632  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:00.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:01.021455  437269 type.go:168] "Request Body" body=""
	I1014 19:41:01.021548  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:01.021937  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:01.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:41:01.521858  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:01.522279  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:02.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:41:02.022289  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:02.022725  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:02.521517  437269 type.go:168] "Request Body" body=""
	I1014 19:41:02.521656  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:02.522050  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:03.021919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:03.022009  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:03.022403  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:03.022475  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:03.522212  437269 type.go:168] "Request Body" body=""
	I1014 19:41:03.522291  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:03.522659  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:03.675962  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:03.727887  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:03.730521  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:03.730562  437269 retry.go:31] will retry after 19.438885547s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
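	
	Each apply attempt above is executed inside the node as a plain command with KUBECONFIG set; the "Process exited with status 1" lines are the exit status of that command, and stderr carries the kubectl message. A rough local analogue of the invocation (a sketch only; minikube's ssh_runner actually runs this over SSH inside the node, and the paths are taken verbatim from the log):
	
	```go
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command(
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
		)
		// Point kubectl at the cluster's kubeconfig, as the logged command does.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// Corresponds to "Process exited with status 1" in the log;
			// stderr holds the validation / connection-refused message.
			fmt.Printf("apply failed: %v\nstderr: %s\n", err, stderr.String())
		}
	}
	```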
	I1014 19:41:04.022253  437269 type.go:168] "Request Body" body=""
	I1014 19:41:04.022379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:04.022809  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:04.521663  437269 type.go:168] "Request Body" body=""
	I1014 19:41:04.521794  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:04.522180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:05.021978  437269 type.go:168] "Request Body" body=""
	I1014 19:41:05.022063  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:05.022412  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:05.522231  437269 type.go:168] "Request Body" body=""
	I1014 19:41:05.522314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:05.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:05.522732  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:06.021349  437269 type.go:168] "Request Body" body=""
	I1014 19:41:06.021429  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:06.021828  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:06.521569  437269 type.go:168] "Request Body" body=""
	I1014 19:41:06.521651  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:06.522040  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:07.021907  437269 type.go:168] "Request Body" body=""
	I1014 19:41:07.021993  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:07.022361  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:07.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:41:07.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:07.522720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:07.522816  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:08.021308  437269 type.go:168] "Request Body" body=""
	I1014 19:41:08.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:08.021750  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:08.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:41:08.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:08.522125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:09.021981  437269 type.go:168] "Request Body" body=""
	I1014 19:41:09.022069  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:09.022464  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:09.521240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:09.521389  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:09.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:10.021609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:10.021695  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:10.022108  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:10.022177  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:10.522050  437269 type.go:168] "Request Body" body=""
	I1014 19:41:10.522140  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:10.522549  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:11.021354  437269 type.go:168] "Request Body" body=""
	I1014 19:41:11.021435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:11.021862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:11.521641  437269 type.go:168] "Request Body" body=""
	I1014 19:41:11.521740  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:11.522168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:12.022028  437269 type.go:168] "Request Body" body=""
	I1014 19:41:12.022114  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:12.022483  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:12.022549  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:12.521254  437269 type.go:168] "Request Body" body=""
	I1014 19:41:12.521342  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:12.521740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:13.021557  437269 type.go:168] "Request Body" body=""
	I1014 19:41:13.021642  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:13.022039  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:13.521864  437269 type.go:168] "Request Body" body=""
	I1014 19:41:13.521953  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:13.522323  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:14.022194  437269 type.go:168] "Request Body" body=""
	I1014 19:41:14.022287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:14.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:14.022724  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:14.521434  437269 type.go:168] "Request Body" body=""
	I1014 19:41:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:14.521992  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:15.021751  437269 type.go:168] "Request Body" body=""
	I1014 19:41:15.021849  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:15.022211  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:15.522050  437269 type.go:168] "Request Body" body=""
	I1014 19:41:15.522133  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:15.522522  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:16.021287  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.021373  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.021781  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:16.485413  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:16.522201  437269 type.go:168] "Request Body" body=""
	I1014 19:41:16.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:16.522623  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:16.522694  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:16.537285  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:16.540211  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:16.540239  437269 retry.go:31] will retry after 23.522391633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:17.021909  437269 type.go:168] "Request Body" body=""
	I1014 19:41:17.022015  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:17.022407  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:17.522283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:17.522380  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:17.522743  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:18.021576  437269 type.go:168] "Request Body" body=""
	I1014 19:41:18.021671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:18.022118  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:18.522003  437269 type.go:168] "Request Body" body=""
	I1014 19:41:18.522089  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:18.522516  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:19.021291  437269 type.go:168] "Request Body" body=""
	I1014 19:41:19.021372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:19.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:19.021855  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:19.521591  437269 type.go:168] "Request Body" body=""
	I1014 19:41:19.521674  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:19.522078  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:20.021898  437269 type.go:168] "Request Body" body=""
	I1014 19:41:20.021987  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:20.022480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:20.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:41:20.521403  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:20.521841  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:21.021619  437269 type.go:168] "Request Body" body=""
	I1014 19:41:21.021721  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:21.022173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:21.022242  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:21.522084  437269 type.go:168] "Request Body" body=""
	I1014 19:41:21.522176  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:21.522550  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:22.021344  437269 type.go:168] "Request Body" body=""
	I1014 19:41:22.021423  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:22.021877  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:22.521680  437269 type.go:168] "Request Body" body=""
	I1014 19:41:22.521784  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:22.522158  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:23.022009  437269 type.go:168] "Request Body" body=""
	I1014 19:41:23.022088  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:23.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:23.022557  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:23.169796  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:23.227015  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:23.227096  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.227121  437269 retry.go:31] will retry after 24.705053737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:23.521443  437269 type.go:168] "Request Body" body=""
	I1014 19:41:23.521533  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:23.522057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:24.021980  437269 type.go:168] "Request Body" body=""
	I1014 19:41:24.022087  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:24.022457  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:24.522136  437269 type.go:168] "Request Body" body=""
	I1014 19:41:24.522235  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:24.522578  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:25.021598  437269 type.go:168] "Request Body" body=""
	I1014 19:41:25.021741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:25.022116  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:25.521746  437269 type.go:168] "Request Body" body=""
	I1014 19:41:25.521865  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:25.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:25.522363  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:26.021980  437269 type.go:168] "Request Body" body=""
	I1014 19:41:26.022056  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:26.022462  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:26.522116  437269 type.go:168] "Request Body" body=""
	I1014 19:41:26.522205  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:26.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:27.022289  437269 type.go:168] "Request Body" body=""
	I1014 19:41:27.022379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:27.022735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:27.521368  437269 type.go:168] "Request Body" body=""
	I1014 19:41:27.521454  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:27.521879  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:28.021445  437269 type.go:168] "Request Body" body=""
	I1014 19:41:28.021545  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:28.021931  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:28.021996  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:28.521541  437269 type.go:168] "Request Body" body=""
	I1014 19:41:28.521630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:28.522060  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:29.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:41:29.021774  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:29.022227  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:29.521894  437269 type.go:168] "Request Body" body=""
	I1014 19:41:29.521983  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:29.522351  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:30.022245  437269 type.go:168] "Request Body" body=""
	I1014 19:41:30.022327  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:30.022707  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:30.022824  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:30.521424  437269 type.go:168] "Request Body" body=""
	I1014 19:41:30.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:30.521982  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:31.021342  437269 type.go:168] "Request Body" body=""
	I1014 19:41:31.021429  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:31.021899  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:31.521503  437269 type.go:168] "Request Body" body=""
	I1014 19:41:31.521595  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:31.522014  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:32.021616  437269 type.go:168] "Request Body" body=""
	I1014 19:41:32.021705  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:32.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:32.521679  437269 type.go:168] "Request Body" body=""
	I1014 19:41:32.521783  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:32.522156  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:32.522231  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:33.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:33.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:33.022214  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:33.521935  437269 type.go:168] "Request Body" body=""
	I1014 19:41:33.522024  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:33.522446  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:34.021233  437269 type.go:168] "Request Body" body=""
	I1014 19:41:34.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:34.021702  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:34.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:41:34.521444  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:34.521880  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:35.021696  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.021799  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.022177  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:35.022244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:35.521929  437269 type.go:168] "Request Body" body=""
	I1014 19:41:35.522017  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:35.522385  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.022241  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.022330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.022808  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:36.521609  437269 type.go:168] "Request Body" body=""
	I1014 19:41:36.521699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:36.522099  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:37.021877  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.021957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.022344  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:37.022414  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:37.522189  437269 type.go:168] "Request Body" body=""
	I1014 19:41:37.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:37.522617  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:38.021362  437269 type.go:168] "Request Body" body=""
	I1014 19:41:38.021440  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:38.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:38.521628  437269 type.go:168] "Request Body" body=""
	I1014 19:41:38.521722  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:38.522097  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:39.021917  437269 type.go:168] "Request Body" body=""
	I1014 19:41:39.022012  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:39.022384  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:39.022447  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:39.522314  437269 type.go:168] "Request Body" body=""
	I1014 19:41:39.522401  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:39.522788  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:40.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:41:40.021857  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:40.022236  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:40.063502  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:41:40.119488  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:40.119566  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:40.119604  437269 retry.go:31] will retry after 34.554126144s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
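The retry.go line above schedules another apply after a roughly 34.5 s delay. A minimal hand-rolled sketch of that retry-until-success shape; the jittered exponential delays are an assumption for illustration, not minikube's exact backoff policy:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or maxElapsed is exceeded,
// printing the same "will retry after Xs" style of message as the log.
func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Second), err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 30*time.Second {
			delay *= 2 // cap the exponential growth
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}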
	I1014 19:41:40.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:41:40.522383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:40.522878  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:41.021513  437269 type.go:168] "Request Body" body=""
	I1014 19:41:41.021597  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:41.021974  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:41.521785  437269 type.go:168] "Request Body" body=""
	I1014 19:41:41.521864  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:41.522250  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:41.522330  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:42.022203  437269 type.go:168] "Request Body" body=""
	I1014 19:41:42.022322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:42.022810  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:42.521587  437269 type.go:168] "Request Body" body=""
	I1014 19:41:42.521669  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:42.522059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:43.021981  437269 type.go:168] "Request Body" body=""
	I1014 19:41:43.022074  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:43.022442  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:43.521224  437269 type.go:168] "Request Body" body=""
	I1014 19:41:43.521304  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:43.521705  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:44.021370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.021454  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:44.021956  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:44.521703  437269 type.go:168] "Request Body" body=""
	I1014 19:41:44.521821  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:44.522229  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.022076  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.022158  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.022500  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:45.521283  437269 type.go:168] "Request Body" body=""
	I1014 19:41:45.521372  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:45.521787  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:46.021585  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.021687  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.022067  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:46.022144  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:46.521959  437269 type.go:168] "Request Body" body=""
	I1014 19:41:46.522047  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:46.522400  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.022244  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.022326  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.022720  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.521502  437269 type.go:168] "Request Body" body=""
	I1014 19:41:47.521586  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:47.521971  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:47.932453  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:41:47.984361  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:41:47.987254  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:41:47.987292  437269 retry.go:31] will retry after 37.673790461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
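The "Request"/"Response" pairs that dominate this log are produced by a debug wrapper around the HTTP transport. A minimal sketch of the same idea as a custom http.RoundTripper; the printed fields echo the log format, but the type is illustrative, not client-go's round_trippers implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingRT wraps another RoundTripper and prints verb, URL and latency.
type loggingRT struct {
	base http.RoundTripper
}

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
	resp, err := l.base.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// On "connection refused" there is no response at all, which is
		// why the log shows status="" headers="" milliseconds=0.
		fmt.Printf("Response status=\"\" headers=\"\" milliseconds=%d err=%v\n", ms, err)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRT{base: http.DefaultTransport}}
	_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-744288")
}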
	I1014 19:41:48.021563  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.021661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.022072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:48.521661  437269 type.go:168] "Request Body" body=""
	I1014 19:41:48.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:48.522153  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:48.522222  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:49.021778  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.021869  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.022246  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:49.521919  437269 type.go:168] "Request Body" body=""
	I1014 19:41:49.521999  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:49.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.021911  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.021996  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.022358  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:50.522021  437269 type.go:168] "Request Body" body=""
	I1014 19:41:50.522121  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:50.522513  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:50.522647  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:51.022257  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.022355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.022711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:51.521301  437269 type.go:168] "Request Body" body=""
	I1014 19:41:51.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:51.521820  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.021365  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.021844  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:52.521373  437269 type.go:168] "Request Body" body=""
	I1014 19:41:52.521451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:52.521825  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:53.021413  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.021940  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:53.022029  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:53.521560  437269 type.go:168] "Request Body" body=""
	I1014 19:41:53.521663  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:53.522072  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.021872  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.021964  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:54.521983  437269 type.go:168] "Request Body" body=""
	I1014 19:41:54.522067  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:54.522484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.021263  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.021357  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.021747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:55.521288  437269 type.go:168] "Request Body" body=""
	I1014 19:41:55.521376  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:55.521739  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:55.521840  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:56.021322  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.021409  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.021840  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:56.521370  437269 type.go:168] "Request Body" body=""
	I1014 19:41:56.521452  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:56.521831  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.022041  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.022397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:57.522061  437269 type.go:168] "Request Body" body=""
	I1014 19:41:57.522137  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:57.522480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:41:57.522553  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:41:58.022151  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.022236  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.022597  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:58.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:41:58.522322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:58.522668  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.021333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.021717  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:41:59.521251  437269 type.go:168] "Request Body" body=""
	I1014 19:41:59.521330  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:41:59.521703  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:00.021653  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.021752  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.022142  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:00.022220  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:00.522036  437269 type.go:168] "Request Body" body=""
	I1014 19:42:00.522123  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:00.522466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.022199  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.022290  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.022633  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:01.521196  437269 type.go:168] "Request Body" body=""
	I1014 19:42:01.521278  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:01.521637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:02.022258  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.022335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.022740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:02.022848  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:02.521321  437269 type.go:168] "Request Body" body=""
	I1014 19:42:02.521405  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:02.521800  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.021392  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.021749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:03.521348  437269 type.go:168] "Request Body" body=""
	I1014 19:42:03.521443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:03.521938  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.021944  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.022035  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.022414  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:04.522132  437269 type.go:168] "Request Body" body=""
	I1014 19:42:04.522227  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:04.522582  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:04.522653  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:05.021481  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.021561  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.021905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:05.521556  437269 type.go:168] "Request Body" body=""
	I1014 19:42:05.521637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:05.522027  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.021613  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.021699  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.022057  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:06.521633  437269 type.go:168] "Request Body" body=""
	I1014 19:42:06.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:06.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:07.021749  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.021848  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.022194  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:07.022260  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:07.521871  437269 type.go:168] "Request Body" body=""
	I1014 19:42:07.521957  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:07.522300  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.021955  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.022031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.022379  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:08.522039  437269 type.go:168] "Request Body" body=""
	I1014 19:42:08.522117  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:08.522476  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:09.022164  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.022254  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.022634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:09.022701  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:09.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:42:09.521333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:09.521715  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.021732  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.021859  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.022260  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:10.521865  437269 type.go:168] "Request Body" body=""
	I1014 19:42:10.521952  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:10.522296  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.021963  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.022051  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.022419  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:11.522129  437269 type.go:168] "Request Body" body=""
	I1014 19:42:11.522219  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:11.522604  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:11.522681  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:12.022256  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.022343  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.022700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:12.521278  437269 type.go:168] "Request Body" body=""
	I1014 19:42:12.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:12.521732  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.022114  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.022198  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.022561  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:13.522240  437269 type.go:168] "Request Body" body=""
	I1014 19:42:13.522319  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:13.522711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:42:13.522798  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:42:14.021579  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.021707  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.022154  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.521710  437269 type.go:168] "Request Body" body=""
	I1014 19:42:14.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:14.522225  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:42:14.674573  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:42:14.729085  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729138  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:14.729273  437269 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
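At this point 'default-storageclass' has exhausted its retries and surfaces the error, and every attempt failed for the same root cause: nothing is listening on port 8441. One way to avoid burning retries like this is to gate the applies on the apiserver's /readyz health endpoint first; a minimal sketch under that assumption (this is not what minikube does in the run above):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitAPIServerReady polls kube-apiserver's /readyz endpoint until it
// returns 200 or the deadline passes; the port matches the log's 8441.
func waitAPIServerReady(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's serving cert is not in the system trust store
		// here; skipping verification keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitAPIServerReady("https://localhost:8441/readyz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

/readyz is typically readable by unauthenticated clients under the default RBAC bindings, so a plain HTTPS probe is usually sufficient once the certificate check is relaxed.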
	I1014 19:42:15.021737  437269 type.go:168] "Request Body" body=""
	I1014 19:42:15.021834  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:15.022205  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 request/response cycle above repeats every ~500 ms through 19:42:25, each attempt returning no response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same retry warning at 19:42:16, 19:42:18, 19:42:20, 19:42:22 and 19:42:25: error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused ...]
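	The loop above is minikube's node-readiness wait; a manual equivalent of the probe it performs (a sketch, assuming the same kubeconfig; the node name and endpoint are from the log) would be:

	  /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig /var/lib/minikube/kubeconfig \
	    get node functional-744288 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # While 192.168.49.2:8441 refuses connections this fails identically:
	  #   ... connect: connection refused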
	I1014 19:42:25.661672  437269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:42:25.715017  437269 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717809  437269 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 19:42:25.717938  437269 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 19:42:25.719888  437269 out.go:179] * Enabled addons: 
	I1014 19:42:25.722455  437269 addons.go:514] duration metric: took 1m51.818834592s for enable addons: enabled=[]
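	Every addon callback failed against the unreachable apiserver, so after 1m51s the enabled list is empty. Since the cluster runs the crio runtime, a plausible next step (hypothetical commands, not part of the captured log) is to check whether the kube-apiserver container is running on the node at all, and what it last logged:

	  sudo crictl ps -a --name kube-apiserver
	  sudo crictl logs "$(sudo crictl ps -a -q --name kube-apiserver | head -n1)"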
	I1014 19:42:26.021269  437269 type.go:168] "Request Body" body=""
	I1014 19:42:26.021349  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:42:26.021816  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the polling continues unchanged: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 request/response cycle every ~500 ms, each with an empty response (status="" headers="" milliseconds=0), and the same node_ready.go:55 connection-refused retry warning roughly every 2 s from 19:42:27 through 19:43:13 ...]
	I1014 19:43:14.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:43:14.521379  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:14.521818  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:15.021769  437269 type.go:168] "Request Body" body=""
	I1014 19:43:15.021869  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:15.022238  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:15.521883  437269 type.go:168] "Request Body" body=""
	I1014 19:43:15.521969  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:15.522302  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:15.522372  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:16.021990  437269 type.go:168] "Request Body" body=""
	I1014 19:43:16.022071  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:16.022459  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:16.522107  437269 type.go:168] "Request Body" body=""
	I1014 19:43:16.522190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:16.522527  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:17.022255  437269 type.go:168] "Request Body" body=""
	I1014 19:43:17.022335  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:17.022728  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:17.521281  437269 type.go:168] "Request Body" body=""
	I1014 19:43:17.521369  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:17.521726  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:18.021392  437269 type.go:168] "Request Body" body=""
	I1014 19:43:18.021485  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:18.021932  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:18.022012  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:18.521618  437269 type.go:168] "Request Body" body=""
	I1014 19:43:18.521708  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:18.522112  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:19.021718  437269 type.go:168] "Request Body" body=""
	I1014 19:43:19.021829  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:19.022200  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:19.521926  437269 type.go:168] "Request Body" body=""
	I1014 19:43:19.522009  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:19.522391  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:20.021218  437269 type.go:168] "Request Body" body=""
	I1014 19:43:20.021308  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:20.021706  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:20.521306  437269 type.go:168] "Request Body" body=""
	I1014 19:43:20.521386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:20.521816  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:20.521893  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:21.021342  437269 type.go:168] "Request Body" body=""
	I1014 19:43:21.021427  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:21.021835  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:21.521377  437269 type.go:168] "Request Body" body=""
	I1014 19:43:21.521483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:21.521876  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:22.021433  437269 type.go:168] "Request Body" body=""
	I1014 19:43:22.021530  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:22.021848  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:22.521448  437269 type.go:168] "Request Body" body=""
	I1014 19:43:22.521550  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:22.521980  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:22.522047  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:23.021566  437269 type.go:168] "Request Body" body=""
	I1014 19:43:23.021671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:23.022058  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:23.521627  437269 type.go:168] "Request Body" body=""
	I1014 19:43:23.521736  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:23.522126  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:24.022029  437269 type.go:168] "Request Body" body=""
	I1014 19:43:24.022121  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:24.022504  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:24.522205  437269 type.go:168] "Request Body" body=""
	I1014 19:43:24.522294  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:24.522686  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:24.522787  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:25.021717  437269 type.go:168] "Request Body" body=""
	I1014 19:43:25.021820  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:25.022213  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:25.521882  437269 type.go:168] "Request Body" body=""
	I1014 19:43:25.521969  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:25.522345  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:26.021966  437269 type.go:168] "Request Body" body=""
	I1014 19:43:26.022053  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:26.022395  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:26.522078  437269 type.go:168] "Request Body" body=""
	I1014 19:43:26.522167  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:26.522591  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:27.022256  437269 type.go:168] "Request Body" body=""
	I1014 19:43:27.022347  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:27.022787  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:27.022856  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:27.521335  437269 type.go:168] "Request Body" body=""
	I1014 19:43:27.521438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:27.521885  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:28.021454  437269 type.go:168] "Request Body" body=""
	I1014 19:43:28.021560  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:28.021963  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:28.521548  437269 type.go:168] "Request Body" body=""
	I1014 19:43:28.521631  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:28.522049  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:29.021606  437269 type.go:168] "Request Body" body=""
	I1014 19:43:29.021709  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:29.022129  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:29.521791  437269 type.go:168] "Request Body" body=""
	I1014 19:43:29.521879  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:29.522325  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:29.522390  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:30.022166  437269 type.go:168] "Request Body" body=""
	I1014 19:43:30.022260  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:30.022687  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:30.522272  437269 type.go:168] "Request Body" body=""
	I1014 19:43:30.522355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:30.522747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:31.021385  437269 type.go:168] "Request Body" body=""
	I1014 19:43:31.021484  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:31.021909  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:31.521491  437269 type.go:168] "Request Body" body=""
	I1014 19:43:31.521578  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:31.522023  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:32.021606  437269 type.go:168] "Request Body" body=""
	I1014 19:43:32.021692  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:32.022091  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:32.022172  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:32.521661  437269 type.go:168] "Request Body" body=""
	I1014 19:43:32.521740  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:32.522158  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:33.021717  437269 type.go:168] "Request Body" body=""
	I1014 19:43:33.021815  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:33.022209  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:33.521885  437269 type.go:168] "Request Body" body=""
	I1014 19:43:33.521973  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:33.522384  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:34.021211  437269 type.go:168] "Request Body" body=""
	I1014 19:43:34.021293  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:34.021699  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:34.521252  437269 type.go:168] "Request Body" body=""
	I1014 19:43:34.521332  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:34.521740  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:34.521854  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:35.021628  437269 type.go:168] "Request Body" body=""
	I1014 19:43:35.021734  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:35.022103  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:35.521777  437269 type.go:168] "Request Body" body=""
	I1014 19:43:35.521861  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:35.522282  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:36.021901  437269 type.go:168] "Request Body" body=""
	I1014 19:43:36.021991  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:36.022338  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:36.522081  437269 type.go:168] "Request Body" body=""
	I1014 19:43:36.522161  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:36.522532  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:36.522600  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:37.022222  437269 type.go:168] "Request Body" body=""
	I1014 19:43:37.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:37.022680  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:37.521261  437269 type.go:168] "Request Body" body=""
	I1014 19:43:37.521365  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:37.521784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:38.021342  437269 type.go:168] "Request Body" body=""
	I1014 19:43:38.021427  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:38.021897  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:38.521489  437269 type.go:168] "Request Body" body=""
	I1014 19:43:38.521583  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:38.521930  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:39.021573  437269 type.go:168] "Request Body" body=""
	I1014 19:43:39.021673  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:39.022106  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:39.022190  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:39.521695  437269 type.go:168] "Request Body" body=""
	I1014 19:43:39.521806  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:39.522190  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:40.022070  437269 type.go:168] "Request Body" body=""
	I1014 19:43:40.022155  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:40.022515  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:40.522191  437269 type.go:168] "Request Body" body=""
	I1014 19:43:40.522278  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:40.522665  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:41.021264  437269 type.go:168] "Request Body" body=""
	I1014 19:43:41.021347  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:41.021730  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:41.521285  437269 type.go:168] "Request Body" body=""
	I1014 19:43:41.521368  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:41.521747  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:41.521850  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:42.021332  437269 type.go:168] "Request Body" body=""
	I1014 19:43:42.021413  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:42.021835  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:42.521390  437269 type.go:168] "Request Body" body=""
	I1014 19:43:42.521492  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:42.521872  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:43.021448  437269 type.go:168] "Request Body" body=""
	I1014 19:43:43.021551  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:43.021984  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:43.521527  437269 type.go:168] "Request Body" body=""
	I1014 19:43:43.521610  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:43.521979  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:43.522054  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:44.021891  437269 type.go:168] "Request Body" body=""
	I1014 19:43:44.021982  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:44.022346  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:44.522015  437269 type.go:168] "Request Body" body=""
	I1014 19:43:44.522103  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:44.522480  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:45.021474  437269 type.go:168] "Request Body" body=""
	I1014 19:43:45.021561  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:45.021945  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:45.521543  437269 type.go:168] "Request Body" body=""
	I1014 19:43:45.521646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:45.522059  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:45.522127  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:46.021638  437269 type.go:168] "Request Body" body=""
	I1014 19:43:46.021729  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:46.022191  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:46.521736  437269 type.go:168] "Request Body" body=""
	I1014 19:43:46.521839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:46.522226  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:47.021891  437269 type.go:168] "Request Body" body=""
	I1014 19:43:47.021986  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:47.022382  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:47.522067  437269 type.go:168] "Request Body" body=""
	I1014 19:43:47.522151  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:47.522552  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:47.522621  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:48.022193  437269 type.go:168] "Request Body" body=""
	I1014 19:43:48.022285  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:48.022636  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:48.521224  437269 type.go:168] "Request Body" body=""
	I1014 19:43:48.521322  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:48.521716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:49.021262  437269 type.go:168] "Request Body" body=""
	I1014 19:43:49.021340  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:49.021716  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:49.521334  437269 type.go:168] "Request Body" body=""
	I1014 19:43:49.521413  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:49.521823  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:50.021743  437269 type.go:168] "Request Body" body=""
	I1014 19:43:50.021874  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:50.022283  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:50.022349  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:50.521963  437269 type.go:168] "Request Body" body=""
	I1014 19:43:50.522049  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:50.522461  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:51.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:43:51.022266  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:51.022629  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:51.522282  437269 type.go:168] "Request Body" body=""
	I1014 19:43:51.522383  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:51.522865  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:52.021416  437269 type.go:168] "Request Body" body=""
	I1014 19:43:52.021507  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:52.021884  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:52.521517  437269 type.go:168] "Request Body" body=""
	I1014 19:43:52.521611  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:52.522082  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:52.522155  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:53.021656  437269 type.go:168] "Request Body" body=""
	I1014 19:43:53.021742  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:53.022136  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:53.521806  437269 type.go:168] "Request Body" body=""
	I1014 19:43:53.521891  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:53.522261  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:54.022341  437269 type.go:168] "Request Body" body=""
	I1014 19:43:54.022440  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:54.022890  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:54.521448  437269 type.go:168] "Request Body" body=""
	I1014 19:43:54.521552  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:54.521966  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:55.021854  437269 type.go:168] "Request Body" body=""
	I1014 19:43:55.021934  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:55.022336  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:55.022402  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:55.521987  437269 type.go:168] "Request Body" body=""
	I1014 19:43:55.522071  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:55.522460  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:56.022232  437269 type.go:168] "Request Body" body=""
	I1014 19:43:56.022316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:56.022653  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:56.521227  437269 type.go:168] "Request Body" body=""
	I1014 19:43:56.521302  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:56.521701  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:57.021269  437269 type.go:168] "Request Body" body=""
	I1014 19:43:57.021349  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:57.021719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:57.521302  437269 type.go:168] "Request Body" body=""
	I1014 19:43:57.521398  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:57.521838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:57.521899  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:43:58.021391  437269 type.go:168] "Request Body" body=""
	I1014 19:43:58.021485  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:58.021875  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:58.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:43:58.521550  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:58.521987  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:59.021602  437269 type.go:168] "Request Body" body=""
	I1014 19:43:59.021701  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:59.022089  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:43:59.521704  437269 type.go:168] "Request Body" body=""
	I1014 19:43:59.521805  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:43:59.522205  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:43:59.522272  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:00.022040  437269 type.go:168] "Request Body" body=""
	I1014 19:44:00.022132  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:00.022504  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:00.522200  437269 type.go:168] "Request Body" body=""
	I1014 19:44:00.522297  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:00.522735  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:01.021297  437269 type.go:168] "Request Body" body=""
	I1014 19:44:01.021387  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:01.021784  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:01.521307  437269 type.go:168] "Request Body" body=""
	I1014 19:44:01.521399  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:01.521850  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:02.021406  437269 type.go:168] "Request Body" body=""
	I1014 19:44:02.021500  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:02.021877  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:02.021945  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:02.521436  437269 type.go:168] "Request Body" body=""
	I1014 19:44:02.521539  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:02.521953  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:03.021516  437269 type.go:168] "Request Body" body=""
	I1014 19:44:03.021598  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:03.022005  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:03.521561  437269 type.go:168] "Request Body" body=""
	I1014 19:44:03.521646  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:03.522077  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:04.021994  437269 type.go:168] "Request Body" body=""
	I1014 19:44:04.022079  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:04.022499  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:04.022572  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:04.522163  437269 type.go:168] "Request Body" body=""
	I1014 19:44:04.522255  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:04.522672  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:05.021565  437269 type.go:168] "Request Body" body=""
	I1014 19:44:05.021656  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:05.022053  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:05.521629  437269 type.go:168] "Request Body" body=""
	I1014 19:44:05.521713  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:05.522128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:06.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:44:06.021801  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:06.022188  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:06.521851  437269 type.go:168] "Request Body" body=""
	I1014 19:44:06.521937  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:06.522347  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:06.522417  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:07.022007  437269 type.go:168] "Request Body" body=""
	I1014 19:44:07.022086  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:07.022436  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:07.522203  437269 type.go:168] "Request Body" body=""
	I1014 19:44:07.522282  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:07.522638  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:08.021309  437269 type.go:168] "Request Body" body=""
	I1014 19:44:08.021397  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:08.021803  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:08.521985  437269 type.go:168] "Request Body" body=""
	I1014 19:44:08.522062  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:08.522422  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:08.522484  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:09.022109  437269 type.go:168] "Request Body" body=""
	I1014 19:44:09.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:09.022550  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:09.522226  437269 type.go:168] "Request Body" body=""
	I1014 19:44:09.522312  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:09.522687  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:10.021566  437269 type.go:168] "Request Body" body=""
	I1014 19:44:10.021708  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:10.022064  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:10.521657  437269 type.go:168] "Request Body" body=""
	I1014 19:44:10.521776  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:10.522143  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:11.021701  437269 type.go:168] "Request Body" body=""
	I1014 19:44:11.021797  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:11.022127  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:11.022194  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:11.521807  437269 type.go:168] "Request Body" body=""
	I1014 19:44:11.521884  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:11.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:12.021962  437269 type.go:168] "Request Body" body=""
	I1014 19:44:12.022049  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:12.022424  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:12.522133  437269 type.go:168] "Request Body" body=""
	I1014 19:44:12.522233  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:12.522615  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:13.022268  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.022358  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.022774  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:13.022845  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:13.521351  437269 type.go:168] "Request Body" body=""
	I1014 19:44:13.521431  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:13.521806  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.021818  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.021912  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.022342  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:14.522064  437269 type.go:168] "Request Body" body=""
	I1014 19:44:14.522156  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:14.522518  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.021381  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.021468  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.021826  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:15.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:15.521487  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:15.521856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:15.521934  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:16.021382  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.021472  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.021855  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:16.521402  437269 type.go:168] "Request Body" body=""
	I1014 19:44:16.521496  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:16.521958  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.021537  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.021618  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.022006  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:17.521572  437269 type.go:168] "Request Body" body=""
	I1014 19:44:17.521652  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:17.522068  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:17.522135  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:18.021636  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.021735  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.022112  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:18.521664  437269 type.go:168] "Request Body" body=""
	I1014 19:44:18.521790  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:18.522173  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.021791  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.021887  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.022264  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:19.521890  437269 type.go:168] "Request Body" body=""
	I1014 19:44:19.521989  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:19.522366  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:19.522432  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:20.022234  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.022313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.022654  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:20.521239  437269 type.go:168] "Request Body" body=""
	I1014 19:44:20.521321  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:20.521737  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.021357  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.021447  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.021856  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:21.521454  437269 type.go:168] "Request Body" body=""
	I1014 19:44:21.521555  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:21.521969  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:22.021534  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.022029  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:22.022098  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:22.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:44:22.521729  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:22.522128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.021712  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.021820  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.022176  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:23.521802  437269 type.go:168] "Request Body" body=""
	I1014 19:44:23.521885  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:23.522258  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:24.022112  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.022201  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.022532  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:24.022600  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:24.522195  437269 type.go:168] "Request Body" body=""
	I1014 19:44:24.522287  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:24.522634  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.021596  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.021676  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.022088  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:25.521654  437269 type.go:168] "Request Body" body=""
	I1014 19:44:25.521741  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:25.522131  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.021684  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.021798  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.022168  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:26.521801  437269 type.go:168] "Request Body" body=""
	I1014 19:44:26.521880  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:26.522232  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:26.522299  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:27.021847  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.021933  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.022292  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:27.521878  437269 type.go:168] "Request Body" body=""
	I1014 19:44:27.521963  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:27.522328  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.021519  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.021599  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.021968  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:28.521573  437269 type.go:168] "Request Body" body=""
	I1014 19:44:28.521667  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:28.522077  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:29.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.021839  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.022235  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:29.022308  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:29.521910  437269 type.go:168] "Request Body" body=""
	I1014 19:44:29.522006  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:29.522371  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.021252  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.021348  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.021744  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:30.521308  437269 type.go:168] "Request Body" body=""
	I1014 19:44:30.521407  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:30.521858  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.021447  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.021537  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.021993  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:31.521577  437269 type.go:168] "Request Body" body=""
	I1014 19:44:31.521661  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:31.522091  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:31.522171  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:32.021679  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.022180  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:32.521862  437269 type.go:168] "Request Body" body=""
	I1014 19:44:32.521962  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:32.522305  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.022031  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.022124  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.022484  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:33.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:44:33.522294  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:33.522643  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:33.522730  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:34.021707  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.021853  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.022332  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:34.522025  437269 type.go:168] "Request Body" body=""
	I1014 19:44:34.522147  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:34.522536  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.021511  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.021620  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.022043  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:35.522236  437269 type.go:168] "Request Body" body=""
	I1014 19:44:35.522316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:35.522681  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:36.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.021313  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.021734  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:36.021830  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:36.521316  437269 type.go:168] "Request Body" body=""
	I1014 19:44:36.521393  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:36.521798  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.021352  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.021434  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.021888  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:37.521479  437269 type.go:168] "Request Body" body=""
	I1014 19:44:37.521566  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:37.521949  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:38.021522  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.021608  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.022020  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:38.022085  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:38.521582  437269 type.go:168] "Request Body" body=""
	I1014 19:44:38.521671  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:38.522063  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.021622  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.021702  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.022125  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:39.521740  437269 type.go:168] "Request Body" body=""
	I1014 19:44:39.521841  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:39.522231  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:40.022072  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.022157  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:40.022560  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:40.522145  437269 type.go:168] "Request Body" body=""
	I1014 19:44:40.522230  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:40.522581  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.021191  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.021271  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.021663  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:41.521242  437269 type.go:168] "Request Body" body=""
	I1014 19:44:41.521325  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:41.521677  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.021221  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.021300  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.021721  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:42.521295  437269 type.go:168] "Request Body" body=""
	I1014 19:44:42.521377  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:42.521793  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:42.521860  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:43.021377  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.021470  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.021882  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:43.521445  437269 type.go:168] "Request Body" body=""
	I1014 19:44:43.521535  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:43.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.021811  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.021903  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.022312  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:44.521977  437269 type.go:168] "Request Body" body=""
	I1014 19:44:44.522062  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:44.522405  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:44.522472  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:45.021229  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.021316  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.021700  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:45.521363  437269 type.go:168] "Request Body" body=""
	I1014 19:44:45.521476  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:45.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.021493  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.021898  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:46.521589  437269 type.go:168] "Request Body" body=""
	I1014 19:44:46.521682  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:46.522048  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:47.021649  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.021730  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.022119  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:47.022190  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:47.521670  437269 type.go:168] "Request Body" body=""
	I1014 19:44:47.521746  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:47.522086  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.021745  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.022200  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:48.521828  437269 type.go:168] "Request Body" body=""
	I1014 19:44:48.521908  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:48.522263  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:49.021930  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.022025  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.022391  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:49.022471  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:49.522012  437269 type.go:168] "Request Body" body=""
	I1014 19:44:49.522093  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:49.522436  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.021359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.021746  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:50.521299  437269 type.go:168] "Request Body" body=""
	I1014 19:44:50.521381  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:50.521749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.021292  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.021375  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.021830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:51.521389  437269 type.go:168] "Request Body" body=""
	I1014 19:44:51.521483  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:51.521862  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:51.521938  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:52.021392  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.021501  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.021933  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:52.521524  437269 type.go:168] "Request Body" body=""
	I1014 19:44:52.521606  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:52.522002  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.021549  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.021630  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:53.521638  437269 type.go:168] "Request Body" body=""
	I1014 19:44:53.521719  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:53.522129  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:53.522202  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:54.022063  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.022155  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.022563  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:54.522249  437269 type.go:168] "Request Body" body=""
	I1014 19:44:54.522346  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:54.522749  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.021666  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.021750  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.022126  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:55.521738  437269 type.go:168] "Request Body" body=""
	I1014 19:44:55.521847  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:55.522237  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:55.522304  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:56.021875  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.021958  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.022317  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:56.521953  437269 type.go:168] "Request Body" body=""
	I1014 19:44:56.522031  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:56.522402  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.022099  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.022184  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.022571  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:57.522215  437269 type.go:168] "Request Body" body=""
	I1014 19:44:57.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:57.522635  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:44:57.522721  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:44:58.021247  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.021331  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.021778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:58.521330  437269 type.go:168] "Request Body" body=""
	I1014 19:44:58.521406  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:58.521792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.021307  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.021390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.021783  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:44:59.521317  437269 type.go:168] "Request Body" body=""
	I1014 19:44:59.521404  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:44:59.521833  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:45:00.021727  437269 type.go:168] "Request Body" body=""
	I1014 19:45:00.021828  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:45:00.022220  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:45:00.022290  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling loop condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-744288 request/response pair (Accept: application/vnd.kubernetes.protobuf,application/json; User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format) repeats every ~500ms with an empty response and milliseconds=0, and node_ready.go:55 logs the same "will retry ... dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every 2.5s, from 19:45:00.521874 through 19:46:00.522338 ...]
	I1014 19:46:01.022015  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.022109  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.022496  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:01.522199  437269 type.go:168] "Request Body" body=""
	I1014 19:46:01.522284  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:01.522792  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.021313  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.021802  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:02.521355  437269 type.go:168] "Request Body" body=""
	I1014 19:46:02.521435  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:02.521837  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:03.021400  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.021512  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.021843  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:03.021936  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:03.521495  437269 type.go:168] "Request Body" body=""
	I1014 19:46:03.521638  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:03.522055  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.022126  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.022216  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.022594  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:04.522216  437269 type.go:168] "Request Body" body=""
	I1014 19:46:04.522303  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:04.522679  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:05.021591  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.021704  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.022095  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:05.022161  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:05.521689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:05.521808  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:05.522192  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.021790  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.021897  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.022280  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:06.521951  437269 type.go:168] "Request Body" body=""
	I1014 19:46:06.522040  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:06.522397  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:07.022069  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.022173  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.022542  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:07.022606  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:07.522218  437269 type.go:168] "Request Body" body=""
	I1014 19:46:07.522298  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:07.522637  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.021220  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.021314  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.021696  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:08.521279  437269 type.go:168] "Request Body" body=""
	I1014 19:46:08.521359  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:08.521778  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.021343  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.021451  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.021866  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:09.521382  437269 type.go:168] "Request Body" body=""
	I1014 19:46:09.521459  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:09.521838  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:09.521913  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:10.021664  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.021744  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.022128  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:10.521668  437269 type.go:168] "Request Body" body=""
	I1014 19:46:10.521745  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:10.522134  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.021709  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.021817  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.022226  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:11.521863  437269 type.go:168] "Request Body" body=""
	I1014 19:46:11.521950  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:11.522316  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:11.522391  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:12.022004  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.022083  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.022466  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:12.522152  437269 type.go:168] "Request Body" body=""
	I1014 19:46:12.522231  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:12.522572  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.022208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.022306  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.022686  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:13.521212  437269 type.go:168] "Request Body" body=""
	I1014 19:46:13.521286  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:13.521620  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:14.021358  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.021869  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:14.021948  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:14.521427  437269 type.go:168] "Request Body" body=""
	I1014 19:46:14.521526  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:14.521830  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.021689  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.021842  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.022202  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:15.521922  437269 type.go:168] "Request Body" body=""
	I1014 19:46:15.522020  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:15.522429  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:16.022119  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.022199  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.022517  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:16.022586  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:16.521207  437269 type.go:168] "Request Body" body=""
	I1014 19:46:16.521315  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:16.521711  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.021272  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.021355  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.021723  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:17.521289  437269 type.go:168] "Request Body" body=""
	I1014 19:46:17.521390  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:17.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.021359  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.021443  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.021849  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:18.521429  437269 type.go:168] "Request Body" body=""
	I1014 19:46:18.521529  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:18.521905  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:18.521988  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:19.021521  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.021615  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:19.521715  437269 type.go:168] "Request Body" body=""
	I1014 19:46:19.521866  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:19.522297  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.022176  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.022258  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.022646  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:20.522243  437269 type.go:168] "Request Body" body=""
	I1014 19:46:20.522333  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:20.522713  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:20.522805  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:21.021280  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.021386  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.021805  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:21.521347  437269 type.go:168] "Request Body" body=""
	I1014 19:46:21.521438  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:21.521811  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.021364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.021456  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.021861  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:22.521399  437269 type.go:168] "Request Body" body=""
	I1014 19:46:22.521520  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:22.521917  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:23.021531  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.021637  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.022036  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:23.022100  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:23.521619  437269 type.go:168] "Request Body" body=""
	I1014 19:46:23.521711  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:23.522062  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.021884  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.021977  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.022350  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:24.522011  437269 type.go:168] "Request Body" body=""
	I1014 19:46:24.522097  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:24.522508  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.021512  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.021596  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.022033  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:25.521632  437269 type.go:168] "Request Body" body=""
	I1014 19:46:25.521726  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:25.522148  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:25.522244  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:26.021740  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.021850  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.022219  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:26.521873  437269 type.go:168] "Request Body" body=""
	I1014 19:46:26.521956  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:26.522372  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.022036  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.022129  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.022489  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:27.522188  437269 type.go:168] "Request Body" body=""
	I1014 19:46:27.522279  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:27.522655  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:27.522745  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:28.021236  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.021317  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.021676  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:28.521949  437269 type.go:168] "Request Body" body=""
	I1014 19:46:28.522027  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:28.522409  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.022101  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.022190  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.022539  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:29.522171  437269 type.go:168] "Request Body" body=""
	I1014 19:46:29.522256  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:29.522639  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:30.021643  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.021778  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.022144  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:30.022208  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:30.521811  437269 type.go:168] "Request Body" body=""
	I1014 19:46:30.521894  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:30.522289  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.022066  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.022164  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.022558  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:31.522208  437269 type.go:168] "Request Body" body=""
	I1014 19:46:31.522295  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:31.522719  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.021314  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.021414  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.021832  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:32.521364  437269 type.go:168] "Request Body" body=""
	I1014 19:46:32.521461  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:32.521854  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1014 19:46:32.521920  437269 node_ready.go:55] error getting node "functional-744288" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-744288": dial tcp 192.168.49.2:8441: connect: connection refused
	I1014 19:46:33.021401  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.021513  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.022010  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:33.521545  437269 type.go:168] "Request Body" body=""
	I1014 19:46:33.521653  437269 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-744288" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1014 19:46:33.522075  437269 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1014 19:46:34.021736  437269 type.go:168] "Request Body" body=""
	I1014 19:46:34.022027  437269 node_ready.go:38] duration metric: took 6m0.00093705s for node "functional-744288" to be "Ready" ...
	I1014 19:46:34.025220  437269 out.go:203] 
	W1014 19:46:34.026860  437269 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 19:46:34.026878  437269 out.go:285] * 
	W1014 19:46:34.028574  437269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:46:34.030019  437269 out.go:203] 
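
[editor's note] The six-minute stretch of identical GETs above is minikube's node-readiness wait: it re-fetches the Node object on a short interval and checks its Ready condition until a deadline expires. A minimal client-go sketch of that loop follows; this is not minikube's actual code, and only the node name, 500ms interval, and 6m deadline are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the node every 500ms for up to 6 minutes, mirroring the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-744288", metav1.GetOptions{})
			if err != nil {
				fmt.Printf("will retry: %v\n", err) // e.g. connect: connection refused
				return false, nil                   // transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("node never became Ready: %v\n", err) // -> the GUEST_START failure above
	}
}

Because the apiserver container never starts (see the CRI-O and kubelet sections below), every poll fails with connection refused and the wait exhausts its deadline, producing the GUEST_START exit above.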
	
	
	==> CRI-O <==
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.416602802Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c15e9887-9828-442c-b32f-b9922d8e40ac name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.729591696Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b62d124f-6584-4711-88c1-0b165828185a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.729772767Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b62d124f-6584-4711-88c1-0b165828185a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:44 functional-744288 crio[2959]: time="2025-10-14T19:46:44.729820387Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b62d124f-6584-4711-88c1-0b165828185a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.307714844Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=053772f9-08c0-4525-84ce-7a6d7953be6c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.307875021Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=053772f9-08c0-4525-84ce-7a6d7953be6c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.307909073Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=053772f9-08c0-4525-84ce-7a6d7953be6c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.335249563Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c9faeb66-8800-434d-93d6-9b537b9fb0f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.335403796Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c9faeb66-8800-434d-93d6-9b537b9fb0f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.33544857Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c9faeb66-8800-434d-93d6-9b537b9fb0f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.35998157Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f91576cd-e278-41c5-a76f-8db57bc77203 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.360127865Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f91576cd-e278-41c5-a76f-8db57bc77203 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.360206948Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f91576cd-e278-41c5-a76f-8db57bc77203 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.830799939Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=453726c1-6f81-486c-90fa-d6a5f8819591 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.837311462Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=87012645-7107-490b-870d-45e35f2ed8d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.838266924Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=0fdb39c3-2596-44ea-be9f-d601f941db0b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.839296292Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.839521325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.842995952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.843406794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.861346546Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.862775557Z" level=info msg="createCtr: deleting container ID dad5edfe79e46b3e27de965fa552932dff803925c49e7b849ee52bdcdc897a09 from idIndex" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.862817514Z" level=info msg="createCtr: removing container dad5edfe79e46b3e27de965fa552932dff803925c49e7b849ee52bdcdc897a09" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.862859434Z" level=info msg="createCtr: deleting container dad5edfe79e46b3e27de965fa552932dff803925c49e7b849ee52bdcdc897a09 from storage" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:46:45 functional-744288 crio[2959]: time="2025-10-14T19:46:45.864956682Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_7dacb23619ff0889511bcb2e81339e77_0" id=529f00b9-a507-4375-94c1-f6f8ef86c2c5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:46:49.534586    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:49.535243    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:49.537005    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:49.537678    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:46:49.538882    5457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:46:49 up  2:29,  0 user,  load average: 0.17, 0.08, 2.24
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:46:38 functional-744288 kubelet[1809]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:38 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:38 functional-744288 kubelet[1809]: E1014 19:46:38.878336    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.836910    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.865256    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:41 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:41 functional-744288 kubelet[1809]:  > podSandboxID="de75312ccca355aabaabb18a5eb1e6d7a7e4d5b3fb088ce1c5eb28a39d567355"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.865384    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:41 functional-744288 kubelet[1809]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:41 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:41 functional-744288 kubelet[1809]: E1014 19:46:41.865426    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:46:42 functional-744288 kubelet[1809]: E1014 19:46:42.518626    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:46:42 functional-744288 kubelet[1809]: I1014 19:46:42.743900    1809 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:46:42 functional-744288 kubelet[1809]: E1014 19:46:42.744338    1809 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.836842    1809 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.865300    1809 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:46:45 functional-744288 kubelet[1809]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:45 functional-744288 kubelet[1809]:  > podSandboxID="d501fdff2b92902ecd1a22b235a50d225f771b04701776d8a1bb0e78b9481d1c"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.865414    1809 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:46:45 functional-744288 kubelet[1809]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(7dacb23619ff0889511bcb2e81339e77): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:46:45 functional-744288 kubelet[1809]:  > logger="UnhandledError"
	Oct 14 19:46:45 functional-744288 kubelet[1809]: E1014 19:46:45.865451    1809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="7dacb23619ff0889511bcb2e81339e77"
	Oct 14 19:46:47 functional-744288 kubelet[1809]: E1014 19:46:47.102630    1809 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-744288.186e72ac19058e88\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e72ac19058e88  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:36:27.828178568 +0000 UTC m=+0.685163688,LastTimestamp:2025-10-14 19:36:27.829543993 +0000 UTC m=+0.686529115,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:46:47 functional-744288 kubelet[1809]: E1014 19:46:47.885708    1809 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:46:49 functional-744288 kubelet[1809]: E1014 19:46:49.520042    1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
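
[editor's note] The kubelet log shows the same failure from the other side of the CRI: container creation dies on sd-bus, the node can never register, and the lease controller keeps retrying its GET against the unreachable apiserver at a 7s interval. For orientation, the lease read it retries is equivalent to this client-go call (illustrative, not kubelet source):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Node heartbeats live as Lease objects in kube-node-lease; the kubelet
	// ensures this object exists and renews it. With the apiserver down, this
	// GET fails with "connection refused" and is retried, as logged above.
	lease, err := client.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "functional-744288", metav1.GetOptions{})
	if err != nil {
		fmt.Println("will retry:", err)
		return
	}
	fmt.Println("lease holder:", *lease.Spec.HolderIdentity)
}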
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (348.240805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.30s)
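
[editor's note] helpers_test probes the apiserver with minikube status --format={{.APIServer}}; the --format value is a Go text/template rendered against the profile's status, which is why the command prints the bare word Stopped. A toy rendering (the struct is illustrative; only the field used by the command above is shown):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders its status template against.
type Status struct {
	APIServer string
}

func main() {
	st := Status{APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped, matching the stdout above
}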

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (737.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m15.073265809s)
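
[editor's note] ExtraConfig exercises minikube's --extra-config flag, whose values take the form component.key=value: the prefix selects the component (here the apiserver) and the remainder is passed through as that component's flag. A hypothetical parser showing the shape of the value (not minikube's actual implementation):

package main

import (
	"fmt"
	"strings"
)

func main() {
	extra := "apiserver.enable-admission-plugins=NamespaceAutoProvision"
	kv := strings.SplitN(extra, "=", 2)   // ["apiserver.enable-admission-plugins", "NamespaceAutoProvision"]
	comp := strings.SplitN(kv[0], ".", 2) // ["apiserver", "enable-admission-plugins"]
	fmt.Printf("component=%s flag=--%s=%s\n", comp[0], comp[1], kv[1])
	// -> component=apiserver flag=--enable-admission-plugins=NamespaceAutoProvision
}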

                                                
                                                
-- stdout --
	* [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.568458ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-744288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m15.077693791s for "functional-744288" cluster.
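Note: the kubeadm output above already names the triage steps; a minimal sketch of running them against this profile, wrapping the quoted crictl commands in the same ssh subcommand the audit table below shows (CONTAINERID is a placeholder):

	out/minikube-linux-amd64 -p functional-744288 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p functional-744288 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"
	# collect the full log bundle, as the advice box above suggests
	out/minikube-linux-amd64 -p functional-744288 logs --file=logs.txt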
I1014 19:59:05.532350  417373 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (323.514713ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
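Note: "Running" above is the state of the docker container, not of the control plane; the same fact can be cross-checked without minikube using a standard docker inspect format filter against the inspect dump shown earlier:

	docker inspect -f '{{.State.Status}}' functional-744288    # expected output: running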
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                              │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	│ start   │ -p functional-744288 --alsologtostderr -v=8                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:40 UTC │                     │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.1                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.3                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:latest                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add minikube-local-cache-test:functional-744288                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache delete minikube-local-cache-test:functional-744288                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl images                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ cache   │ functional-744288 cache reload                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ kubectl │ functional-744288 kubectl -- --context functional-744288 get pods                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ start   │ -p functional-744288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:46:50
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:46:50.499742  443658 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:46:50.500016  443658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:46:50.500020  443658 out.go:374] Setting ErrFile to fd 2...
	I1014 19:46:50.500023  443658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:46:50.500243  443658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:46:50.500711  443658 out.go:368] Setting JSON to false
	I1014 19:46:50.501776  443658 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8957,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:46:50.501876  443658 start.go:141] virtualization: kvm guest
	I1014 19:46:50.504465  443658 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:46:50.505861  443658 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:46:50.505882  443658 notify.go:220] Checking for updates...
	I1014 19:46:50.508327  443658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:46:50.509750  443658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:46:50.510866  443658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:46:50.511854  443658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:46:50.512854  443658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:46:50.514315  443658 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:46:50.514426  443658 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:46:50.538310  443658 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:46:50.538445  443658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:46:50.601114  443658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-14 19:46:50.588718622 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:46:50.601209  443658 docker.go:318] overlay module found
	I1014 19:46:50.603086  443658 out.go:179] * Using the docker driver based on existing profile
	I1014 19:46:50.604379  443658 start.go:305] selected driver: docker
	I1014 19:46:50.604388  443658 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:50.604469  443658 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:46:50.604549  443658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:46:50.666156  443658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-14 19:46:50.655387801 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:46:50.666705  443658 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:46:50.666723  443658 cni.go:84] Creating CNI manager for ""
	I1014 19:46:50.666779  443658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:46:50.666824  443658 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:50.668890  443658 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:46:50.670269  443658 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:46:50.671700  443658 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:46:50.672853  443658 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:46:50.672887  443658 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:46:50.672894  443658 cache.go:58] Caching tarball of preloaded images
	I1014 19:46:50.672978  443658 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:46:50.672993  443658 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:46:50.673002  443658 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:46:50.673099  443658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:46:50.694236  443658 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:46:50.694247  443658 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:46:50.694262  443658 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:46:50.694285  443658 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:46:50.694339  443658 start.go:364] duration metric: took 40.961µs to acquireMachinesLock for "functional-744288"
	I1014 19:46:50.694355  443658 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:46:50.694359  443658 fix.go:54] fixHost starting: 
	I1014 19:46:50.694551  443658 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:46:50.713829  443658 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:46:50.713852  443658 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:46:50.716011  443658 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:46:50.716063  443658 machine.go:93] provisionDockerMachine start ...
	I1014 19:46:50.716145  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:50.734693  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:50.734948  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:50.734956  443658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:46:50.881904  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:46:50.881928  443658 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:46:50.882024  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:50.900923  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:50.901187  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:50.901202  443658 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:46:51.056989  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:46:51.057085  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.074806  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:51.075019  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:51.075030  443658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:46:51.221854  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:46:51.221878  443658 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:46:51.221910  443658 ubuntu.go:190] setting up certificates
	I1014 19:46:51.221952  443658 provision.go:84] configureAuth start
	I1014 19:46:51.222015  443658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:46:51.240005  443658 provision.go:143] copyHostCerts
	I1014 19:46:51.240069  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:46:51.240090  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:46:51.240177  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:46:51.240322  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:46:51.240330  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:46:51.240371  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:46:51.240443  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:46:51.240447  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:46:51.240478  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:46:51.240545  443658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
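	(Editor's note: provision.go generates this server certificate in-process; an equivalent openssl sketch is shown below. The paths and SAN list are taken from the log line above; everything else is assumed.)
	    # Hypothetical openssl equivalent of the in-Go server-cert generation above.
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.functional-744288"
	    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	      -CAcreateserial -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-744288,DNS:localhost,DNS:minikube')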
	I1014 19:46:51.277418  443658 provision.go:177] copyRemoteCerts
	I1014 19:46:51.277469  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:46:51.277512  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.295935  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:51.399940  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:46:51.419014  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:46:51.436411  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:46:51.453971  443658 provision.go:87] duration metric: took 232.002826ms to configureAuth
	I1014 19:46:51.453999  443658 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:46:51.454155  443658 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:46:51.454253  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.471667  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:51.471917  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:51.471928  443658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:46:51.753714  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:46:51.753736  443658 machine.go:96] duration metric: took 1.037666418s to provisionDockerMachine
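	(Editor's note: the restart above only changes crio's flags if its unit actually reads /etc/sysconfig/crio.minikube; the kicbase image is assumed to ship wiring along these lines. This is purely a sketch, not content from this log.)
	    # Hypothetical drop-in; assumes crio's ExecStart consumes $CRIO_MINIKUBE_OPTIONS.
	    sudo mkdir -p /etc/systemd/system/crio.service.d
	    sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf <<'EOF'
	    [Service]
	    EnvironmentFile=-/etc/sysconfig/crio.minikube
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart crio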
	I1014 19:46:51.753750  443658 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:46:51.753791  443658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:46:51.753870  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:46:51.753924  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.771894  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:51.875275  443658 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:46:51.879014  443658 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:46:51.879036  443658 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:46:51.879053  443658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:46:51.879110  443658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:46:51.879192  443658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:46:51.879264  443658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:46:51.879295  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:46:51.887031  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:46:51.905744  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:46:51.923826  443658 start.go:296] duration metric: took 170.03666ms for postStartSetup
	I1014 19:46:51.923911  443658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:46:51.923959  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.942362  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.043778  443658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:46:52.048837  443658 fix.go:56] duration metric: took 1.354467438s for fixHost
	I1014 19:46:52.048860  443658 start.go:83] releasing machines lock for "functional-744288", held for 1.354513179s
	I1014 19:46:52.048940  443658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:46:52.067069  443658 ssh_runner.go:195] Run: cat /version.json
	I1014 19:46:52.067102  443658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:46:52.067120  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:52.067171  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:52.086721  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.087447  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.242329  443658 ssh_runner.go:195] Run: systemctl --version
	I1014 19:46:52.249118  443658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:46:52.286245  443658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:46:52.291299  443658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:46:52.291349  443658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:46:52.300635  443658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:46:52.300652  443658 start.go:495] detecting cgroup driver to use...
	I1014 19:46:52.300686  443658 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:46:52.300736  443658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:46:52.316275  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:46:52.329801  443658 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:46:52.329853  443658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:46:52.346243  443658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:46:52.359490  443658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:46:52.447197  443658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:46:52.538861  443658 docker.go:234] disabling docker service ...
	I1014 19:46:52.538916  443658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:46:52.553930  443658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:46:52.567369  443658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:46:52.660956  443658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:46:52.750890  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:46:52.763838  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:46:52.778079  443658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:46:52.778155  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.787486  443658 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:46:52.787547  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.796683  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.805576  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.814550  443658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:46:52.822996  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.831895  443658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.840774  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.850651  443658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:46:52.859313  443658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:46:52.867538  443658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:46:52.962127  443658 ssh_runner.go:195] Run: sudo systemctl restart crio
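	(Editor's note: after the sed edits above, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf should read as follows; this is reconstructed from the commands shown, not dumped from the host.)
	    sudo grep -E -A2 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "systemd"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]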
	I1014 19:46:53.076386  443658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:46:53.076443  443658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:46:53.080594  443658 start.go:563] Will wait 60s for crictl version
	I1014 19:46:53.080668  443658 ssh_runner.go:195] Run: which crictl
	I1014 19:46:53.084304  443658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:46:53.109208  443658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:46:53.109281  443658 ssh_runner.go:195] Run: crio --version
	I1014 19:46:53.138035  443658 ssh_runner.go:195] Run: crio --version
	I1014 19:46:53.168844  443658 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:46:53.170307  443658 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:46:53.187885  443658 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:46:53.194070  443658 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1014 19:46:53.195672  443658 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:46:53.195871  443658 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:46:53.195945  443658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:46:53.228563  443658 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:46:53.228574  443658 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:46:53.228622  443658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:46:53.254361  443658 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:46:53.254375  443658 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:46:53.254381  443658 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:46:53.254470  443658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:46:53.254527  443658 ssh_runner.go:195] Run: crio config
	I1014 19:46:53.300404  443658 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1014 19:46:53.300426  443658 cni.go:84] Creating CNI manager for ""
	I1014 19:46:53.300433  443658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:46:53.300444  443658 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:46:53.300495  443658 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:46:53.300616  443658 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
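	(Editor's note: the generated manifest can be sanity-checked on the node before any init phase runs; a minimal check, assuming the kubeadm v1.34.1 binary staged by minikube supports the config validate subcommand:)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml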
	
	I1014 19:46:53.300679  443658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:46:53.309514  443658 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:46:53.309583  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:46:53.317487  443658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:46:53.330167  443658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:46:53.343013  443658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1014 19:46:53.355344  443658 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:46:53.359037  443658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:46:53.444644  443658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:46:53.458036  443658 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:46:53.458048  443658 certs.go:195] generating shared ca certs ...
	I1014 19:46:53.458069  443658 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:46:53.458227  443658 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:46:53.458260  443658 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:46:53.458267  443658 certs.go:257] generating profile certs ...
	I1014 19:46:53.458335  443658 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:46:53.458371  443658 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:46:53.458404  443658 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:46:53.458496  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:46:53.458520  443658 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:46:53.458525  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:46:53.458546  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:46:53.458563  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:46:53.458578  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:46:53.458610  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:46:53.459307  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:46:53.477414  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:46:53.495270  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:46:53.512555  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:46:53.529773  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:46:53.546789  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:46:53.564254  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:46:53.581817  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:46:53.599895  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:46:53.617446  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:46:53.635253  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:46:53.652640  443658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:46:53.665679  443658 ssh_runner.go:195] Run: openssl version
	I1014 19:46:53.672008  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:46:53.680614  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.684470  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.684516  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.719901  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 19:46:53.728850  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:46:53.737556  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.741417  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.741461  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.776307  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:46:53.785236  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:46:53.794084  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.797892  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.797948  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.834593  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
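	(Editor's note: the hash-named symlinks above, b5213941.0, 51391683.0 and 3ec20f2e.0, follow OpenSSL's subject-hash convention; the name can be reproduced directly:)
	    # Prints the subject hash used for the symlink name,
	    # e.g. b5213941 for minikubeCA.pem as seen above.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem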
	I1014 19:46:53.844414  443658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:46:53.848749  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:46:53.887194  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:46:53.922606  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:46:53.957478  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:46:53.992284  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:46:54.027831  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
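	(Editor's note: each -checkend 86400 call above exits non-zero if that certificate expires within 24 hours; the same sweep can be scripted, e.g.:)
	    for c in apiserver-kubelet-client.crt etcd/server.crt front-proxy-client.crt; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c" -checkend 86400 \
	        || echo "$c expires within 24h"
	    done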
	I1014 19:46:54.062500  443658 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:54.062581  443658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:46:54.062679  443658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:46:54.091036  443658 cri.go:89] found id: ""
	I1014 19:46:54.091100  443658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:46:54.099853  443658 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:46:54.099866  443658 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:46:54.099936  443658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:46:54.108263  443658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.108959  443658 kubeconfig.go:125] found "functional-744288" server: "https://192.168.49.2:8441"
	I1014 19:46:54.110744  443658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:46:54.119142  443658 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-14 19:32:19.540090301 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-14 19:46:53.353553179 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1014 19:46:54.119152  443658 kubeadm.go:1160] stopping kube-system containers ...
	I1014 19:46:54.119166  443658 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 19:46:54.119218  443658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:46:54.148301  443658 cri.go:89] found id: ""
	I1014 19:46:54.148360  443658 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 19:46:54.184714  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:46:54.193363  443658 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 14 19:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 14 19:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct 14 19:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 14 19:36 /etc/kubernetes/scheduler.conf
	
	I1014 19:46:54.193426  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:46:54.201562  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:46:54.209606  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.209663  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:46:54.217395  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:46:54.225064  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.225124  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:46:54.232906  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:46:54.240872  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.240946  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:46:54.249061  443658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:46:54.257286  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:54.300108  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.343385  443658 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.043246412s)
	I1014 19:46:55.343447  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.525076  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.576109  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.627520  443658 api_server.go:52] waiting for apiserver process to appear ...
	I1014 19:46:55.627605  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:56.127985  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:56.627838  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:57.127896  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:57.627665  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:58.127984  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:58.627867  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:59.127900  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:59.628123  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:00.128625  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:00.627821  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:01.128624  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:01.628023  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:02.127948  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:02.627921  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:03.127948  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:03.628734  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:04.128392  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:04.628537  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:05.128064  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:05.628802  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:06.128694  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:06.628003  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:07.128400  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:07.628401  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:08.127838  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:08.628730  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:09.128120  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:09.628353  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:10.128434  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:10.628596  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:11.128581  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:11.627793  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:12.127961  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:12.628351  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:13.128116  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:13.627994  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:14.128426  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:14.628582  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:15.127702  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:15.628620  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:16.128507  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:16.628503  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:17.128107  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:17.628228  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:18.128362  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:18.628356  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:19.127920  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:19.628163  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:20.128061  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:20.628781  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:21.127881  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:21.628577  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:22.128659  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:22.628134  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:23.128128  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:23.627880  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:24.128119  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:24.627778  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:25.127863  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:25.628390  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:26.127929  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:26.627912  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:27.128042  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:27.628342  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:28.128494  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:28.628349  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:29.128156  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:29.628040  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:30.127990  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:30.627843  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:31.128015  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:31.627940  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:32.127940  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:32.628112  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:33.127960  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:33.627881  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:34.128093  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:34.628548  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:35.128447  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:35.628084  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:36.128068  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:36.628232  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:37.127674  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:37.627888  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:38.127934  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:38.627918  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:39.127805  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:39.628511  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:40.127885  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:40.628201  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:41.128746  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:41.627723  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:42.127816  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:42.628553  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:43.128336  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:43.628428  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:44.128606  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:44.628579  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:45.128728  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:45.628365  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:46.127990  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:46.628044  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:47.127727  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:47.628173  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:48.128160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:48.627943  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:49.128276  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:49.628454  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:50.127829  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:50.628280  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:51.127982  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:51.628287  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:52.128593  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:52.627776  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:53.127784  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:53.628593  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:54.127690  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:54.627941  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:55.128160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
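	(Editor's note: the run above polls pgrep roughly every 500ms until the 60s apiserver wait expires; a standalone equivalent of that loop, a sketch assuming the same process pattern and timeout:)
	    end=$((SECONDS + 60))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      (( SECONDS >= end )) && { echo 'timed out waiting for kube-apiserver'; break; }
	      sleep 0.5
	    done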
	I1014 19:47:55.628161  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:47:55.628261  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:47:55.656679  443658 cri.go:89] found id: ""
	I1014 19:47:55.656706  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.656717  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:47:55.656725  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:47:55.656807  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:47:55.684574  443658 cri.go:89] found id: ""
	I1014 19:47:55.684594  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.684602  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:47:55.684607  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:47:55.684669  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:47:55.711291  443658 cri.go:89] found id: ""
	I1014 19:47:55.711309  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.711316  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:47:55.711321  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:47:55.711376  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:47:55.738652  443658 cri.go:89] found id: ""
	I1014 19:47:55.738669  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.738678  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:47:55.738690  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:47:55.738752  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:47:55.765191  443658 cri.go:89] found id: ""
	I1014 19:47:55.765208  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.765215  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:47:55.765220  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:47:55.765267  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:47:55.791406  443658 cri.go:89] found id: ""
	I1014 19:47:55.791425  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.791433  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:47:55.791438  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:47:55.791483  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:47:55.817705  443658 cri.go:89] found id: ""
	I1014 19:47:55.817724  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.817732  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:47:55.817741  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:47:55.817787  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:47:55.885166  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:47:55.885191  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:47:55.903388  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:47:55.903408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:47:55.962011  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:47:55.955051    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.955898    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957465    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957907    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.958999    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:47:55.962024  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:47:55.962036  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:47:56.023614  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:47:56.023639  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
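Each polling pass above runs the same per-component container probe. A sketch of the equivalent manual check, reusing the exact crictl invocation from the log (run on the node, e.g. via minikube ssh):

    # list container IDs for each expected control-plane component
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done

Because `crictl ps -a --quiet` prints only container IDs, an empty result is exactly what produces the `found id: ""` / `0 containers: []` lines in the log.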
	I1014 19:47:58.556015  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:58.567258  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:47:58.567330  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:47:58.593588  443658 cri.go:89] found id: ""
	I1014 19:47:58.593606  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.593613  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:47:58.593618  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:47:58.593686  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:47:58.621667  443658 cri.go:89] found id: ""
	I1014 19:47:58.621687  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.621694  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:47:58.621699  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:47:58.621753  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:47:58.648823  443658 cri.go:89] found id: ""
	I1014 19:47:58.648841  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.648851  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:47:58.648858  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:47:58.648920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:47:58.675986  443658 cri.go:89] found id: ""
	I1014 19:47:58.676007  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.676017  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:47:58.676024  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:47:58.676074  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:47:58.703476  443658 cri.go:89] found id: ""
	I1014 19:47:58.703492  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.703499  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:47:58.703504  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:47:58.703553  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:47:58.732093  443658 cri.go:89] found id: ""
	I1014 19:47:58.732116  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.732127  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:47:58.732133  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:47:58.732188  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:47:58.759813  443658 cri.go:89] found id: ""
	I1014 19:47:58.759832  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.759839  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:47:58.759848  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:47:58.759858  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:47:58.829913  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:47:58.829936  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:47:58.848245  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:47:58.848269  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:47:58.907295  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:47:58.900510    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.901027    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.902546    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.903012    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.904214    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:47:58.907316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:47:58.907329  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:47:58.971553  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:47:58.971576  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:01.502989  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:01.514422  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:01.514481  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:01.541083  443658 cri.go:89] found id: ""
	I1014 19:48:01.541099  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.541107  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:01.541113  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:01.541166  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:01.568411  443658 cri.go:89] found id: ""
	I1014 19:48:01.568430  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.568438  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:01.568443  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:01.568507  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:01.596626  443658 cri.go:89] found id: ""
	I1014 19:48:01.596643  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.596651  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:01.596656  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:01.596709  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:01.625098  443658 cri.go:89] found id: ""
	I1014 19:48:01.625114  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.625121  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:01.625126  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:01.625175  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:01.652267  443658 cri.go:89] found id: ""
	I1014 19:48:01.652287  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.652296  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:01.652302  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:01.652369  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:01.680110  443658 cri.go:89] found id: ""
	I1014 19:48:01.680126  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.680132  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:01.680137  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:01.680183  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:01.706650  443658 cri.go:89] found id: ""
	I1014 19:48:01.706673  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.706682  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:01.706692  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:01.706703  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:01.777579  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:01.777603  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:01.796141  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:01.796160  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:01.854657  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:01.848022    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.848515    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850053    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850582    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.851657    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:01.854673  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:01.854688  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:01.921567  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:01.921605  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
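The "container status" gather relies on a small shell fallback. Spelled out with modern command substitution (behavior unchanged from the backtick form in the log):

    # use crictl from PATH if present (else the bare name, so the error message stays readable),
    # and fall back to docker if the CRI listing itself fails
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a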
	I1014 19:48:04.454355  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:04.465748  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:04.465834  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:04.493735  443658 cri.go:89] found id: ""
	I1014 19:48:04.493752  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.493773  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:04.493780  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:04.493837  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:04.520295  443658 cri.go:89] found id: ""
	I1014 19:48:04.520313  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.520321  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:04.520325  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:04.520380  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:04.547856  443658 cri.go:89] found id: ""
	I1014 19:48:04.547880  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.547891  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:04.547898  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:04.547963  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:04.574029  443658 cri.go:89] found id: ""
	I1014 19:48:04.574047  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.574055  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:04.574059  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:04.574111  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:04.600612  443658 cri.go:89] found id: ""
	I1014 19:48:04.600635  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.600643  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:04.600648  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:04.600710  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:04.627768  443658 cri.go:89] found id: ""
	I1014 19:48:04.627787  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.627796  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:04.627803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:04.627868  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:04.654609  443658 cri.go:89] found id: ""
	I1014 19:48:04.654626  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.654633  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:04.654641  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:04.654666  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:04.723997  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:04.724022  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:04.742117  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:04.742138  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:04.800762  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:04.793052    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.793685    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795214    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795736    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.797328    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:04.800782  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:04.800797  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:04.865079  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:04.865104  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:07.397466  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:07.409124  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:07.409189  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:07.436009  443658 cri.go:89] found id: ""
	I1014 19:48:07.436030  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.436039  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:07.436045  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:07.436092  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:07.463450  443658 cri.go:89] found id: ""
	I1014 19:48:07.463467  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.463474  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:07.463479  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:07.463538  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:07.489350  443658 cri.go:89] found id: ""
	I1014 19:48:07.489367  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.489373  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:07.489379  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:07.489423  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:07.516187  443658 cri.go:89] found id: ""
	I1014 19:48:07.516205  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.516212  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:07.516217  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:07.516266  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:07.544147  443658 cri.go:89] found id: ""
	I1014 19:48:07.544163  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.544171  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:07.544178  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:07.544232  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:07.570956  443658 cri.go:89] found id: ""
	I1014 19:48:07.570987  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.570997  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:07.571004  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:07.571055  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:07.599057  443658 cri.go:89] found id: ""
	I1014 19:48:07.599075  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.599083  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:07.599091  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:07.599102  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:07.629352  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:07.629386  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:07.696795  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:07.696819  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:07.714841  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:07.714863  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:07.773003  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:07.765637    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.766223    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.767815    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.768258    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.769624    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:07.773022  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:07.773036  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
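(This 19:48:07 pass gathers its sources in a different order — container status first, CRI-O last — but the content is the same.) The journalctl-based gathers can be reproduced from the host without shelling into the node interactively; `<profile>` below is a placeholder for the profile used in this run:

    # pull the same kubelet and CRI-O logs the test collects (<profile> is hypothetical)
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- sudo journalctl -u crio -n 400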
	I1014 19:48:10.338910  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:10.350323  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:10.350379  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:10.377858  443658 cri.go:89] found id: ""
	I1014 19:48:10.377875  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.377882  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:10.377886  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:10.377938  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:10.404249  443658 cri.go:89] found id: ""
	I1014 19:48:10.404265  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.404272  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:10.404277  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:10.404326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:10.432298  443658 cri.go:89] found id: ""
	I1014 19:48:10.432315  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.432322  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:10.432328  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:10.432377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:10.458476  443658 cri.go:89] found id: ""
	I1014 19:48:10.458495  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.458501  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:10.458507  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:10.458552  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:10.486998  443658 cri.go:89] found id: ""
	I1014 19:48:10.487017  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.487024  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:10.487029  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:10.487075  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:10.514207  443658 cri.go:89] found id: ""
	I1014 19:48:10.514223  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.514230  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:10.514235  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:10.514285  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:10.541589  443658 cri.go:89] found id: ""
	I1014 19:48:10.541604  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.541610  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:10.541618  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:10.541630  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:10.608114  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:10.608140  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:10.627515  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:10.627537  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:10.687776  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:10.680118    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.680631    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682237    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682859    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.684410    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:10.687790  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:10.687805  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:10.752090  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:10.752115  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
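Each pass opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, which finds nothing here. Per procps pgrep, -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n selects the newest match. A quick manual check:

    # exits non-zero and prints nothing when no apiserver process is running
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not found"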
	I1014 19:48:13.282895  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:13.294310  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:13.294364  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:13.321971  443658 cri.go:89] found id: ""
	I1014 19:48:13.321990  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.321999  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:13.322005  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:13.322054  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:13.349696  443658 cri.go:89] found id: ""
	I1014 19:48:13.349717  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.349727  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:13.349734  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:13.349809  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:13.375640  443658 cri.go:89] found id: ""
	I1014 19:48:13.375658  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.375664  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:13.375669  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:13.375723  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:13.401774  443658 cri.go:89] found id: ""
	I1014 19:48:13.401795  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.401805  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:13.401810  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:13.401857  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:13.428959  443658 cri.go:89] found id: ""
	I1014 19:48:13.428976  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.428983  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:13.428988  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:13.429047  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:13.457247  443658 cri.go:89] found id: ""
	I1014 19:48:13.457264  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.457271  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:13.457276  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:13.457324  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:13.483816  443658 cri.go:89] found id: ""
	I1014 19:48:13.483834  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.483841  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:13.483849  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:13.483860  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:13.551788  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:13.551811  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:13.569457  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:13.569478  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:13.627267  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:13.619783    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.620394    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.621969    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.622387    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.623926    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:13.627279  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:13.627289  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:13.691177  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:13.691201  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:16.221827  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:16.233209  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:16.233277  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:16.259929  443658 cri.go:89] found id: ""
	I1014 19:48:16.259948  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.259959  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:16.259966  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:16.260018  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:16.287292  443658 cri.go:89] found id: ""
	I1014 19:48:16.287310  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.287318  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:16.287326  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:16.287381  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:16.314495  443658 cri.go:89] found id: ""
	I1014 19:48:16.314516  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.314525  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:16.314531  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:16.314602  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:16.340741  443658 cri.go:89] found id: ""
	I1014 19:48:16.340772  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.340785  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:16.340791  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:16.340839  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:16.368210  443658 cri.go:89] found id: ""
	I1014 19:48:16.368225  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.368233  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:16.368239  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:16.368289  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:16.394831  443658 cri.go:89] found id: ""
	I1014 19:48:16.394848  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.394858  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:16.394865  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:16.394922  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:16.421594  443658 cri.go:89] found id: ""
	I1014 19:48:16.421614  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.421622  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:16.421631  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:16.421641  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:16.491514  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:16.491538  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:16.509528  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:16.509549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:16.567026  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:16.559396    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.560067    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.561808    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.562264    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.563791    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:16.567039  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:16.567050  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:16.633705  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:16.633729  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
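The timestamps (19:47:55, 19:47:58, 19:48:01, ...) show the whole cycle repeating roughly every three seconds as minikube waits for the apiserver to appear. A rough manual equivalent of that cadence, watching for the container to show up:

    # re-run the probe every 3 seconds (Ctrl-C to stop); the 3 s interval is inferred from the log timestamps
    watch -n 3 'sudo crictl ps -a --quiet --name=kube-apiserver'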
	I1014 19:48:19.170176  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:19.181543  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:19.181597  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:19.207369  443658 cri.go:89] found id: ""
	I1014 19:48:19.207386  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.207392  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:19.207397  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:19.207441  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:19.233860  443658 cri.go:89] found id: ""
	I1014 19:48:19.233881  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.233890  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:19.233896  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:19.233956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:19.260261  443658 cri.go:89] found id: ""
	I1014 19:48:19.260279  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.260287  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:19.260293  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:19.260346  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:19.287494  443658 cri.go:89] found id: ""
	I1014 19:48:19.287515  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.287525  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:19.287532  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:19.287584  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:19.313774  443658 cri.go:89] found id: ""
	I1014 19:48:19.313792  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.313798  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:19.313803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:19.313860  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:19.340266  443658 cri.go:89] found id: ""
	I1014 19:48:19.340286  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.340296  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:19.340305  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:19.340371  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:19.367478  443658 cri.go:89] found id: ""
	I1014 19:48:19.367494  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.367501  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:19.367510  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:19.367519  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:19.434384  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:19.434408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:19.453201  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:19.453221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:19.511748  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:19.504301    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.504947    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506543    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506980    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.508451    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:19.511771  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:19.511786  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:19.572669  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:19.572694  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:22.104359  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:22.116056  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:22.116114  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:22.143506  443658 cri.go:89] found id: ""
	I1014 19:48:22.143526  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.143535  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:22.143542  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:22.143604  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:22.171275  443658 cri.go:89] found id: ""
	I1014 19:48:22.171293  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.171300  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:22.171304  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:22.171354  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:22.200946  443658 cri.go:89] found id: ""
	I1014 19:48:22.200963  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.200969  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:22.200975  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:22.201021  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:22.229821  443658 cri.go:89] found id: ""
	I1014 19:48:22.229838  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.229848  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:22.229853  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:22.229908  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:22.257470  443658 cri.go:89] found id: ""
	I1014 19:48:22.257490  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.257501  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:22.257507  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:22.257561  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:22.286561  443658 cri.go:89] found id: ""
	I1014 19:48:22.286582  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.286590  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:22.286640  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:22.286708  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:22.314642  443658 cri.go:89] found id: ""
	I1014 19:48:22.314659  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.314665  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:22.314673  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:22.314703  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:22.375334  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:22.367894    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.368440    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370076    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370561    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.372196    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:22.375355  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:22.375369  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:22.437367  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:22.437393  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:22.467945  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:22.467963  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:22.538691  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:22.538715  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
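Each diagnostic cycle gathers the same four log sources. The commands below are copied from the Run lines above; redirecting them to files is an added convenience for offline inspection, not something the test itself does:

# Collect the same diagnostics minikube gathers per cycle (sketch).
sudo journalctl -u kubelet -n 400 > kubelet.log
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
sudo journalctl -u crio -n 400 > crio.log
( sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a ) > containers.log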
	I1014 19:48:25.057422  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:25.069417  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:25.069480  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:25.097308  443658 cri.go:89] found id: ""
	I1014 19:48:25.097327  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.097334  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:25.097340  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:25.097399  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:25.124869  443658 cri.go:89] found id: ""
	I1014 19:48:25.124888  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.124897  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:25.124902  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:25.124956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:25.151745  443658 cri.go:89] found id: ""
	I1014 19:48:25.151777  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.151788  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:25.151794  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:25.151851  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:25.178827  443658 cri.go:89] found id: ""
	I1014 19:48:25.178847  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.178857  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:25.178864  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:25.178919  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:25.207030  443658 cri.go:89] found id: ""
	I1014 19:48:25.207048  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.207055  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:25.207060  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:25.207115  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:25.234277  443658 cri.go:89] found id: ""
	I1014 19:48:25.234295  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.234302  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:25.234307  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:25.234351  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:25.260062  443658 cri.go:89] found id: ""
	I1014 19:48:25.260079  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.260085  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:25.260094  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:25.260105  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:25.328418  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:25.328443  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:25.346610  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:25.346630  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:25.405353  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:25.397912    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.398394    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400014    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400430    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.401975    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:25.405366  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:25.405378  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:25.466377  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:25.466403  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:27.999561  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:28.010893  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:28.010948  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:28.037673  443658 cri.go:89] found id: ""
	I1014 19:48:28.037692  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.037699  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:28.037720  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:28.037786  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:28.065810  443658 cri.go:89] found id: ""
	I1014 19:48:28.065828  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.065835  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:28.065840  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:28.065891  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:28.093517  443658 cri.go:89] found id: ""
	I1014 19:48:28.093535  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.093542  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:28.093547  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:28.093594  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:28.120885  443658 cri.go:89] found id: ""
	I1014 19:48:28.120907  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.120917  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:28.120924  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:28.120991  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:28.151601  443658 cri.go:89] found id: ""
	I1014 19:48:28.151621  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.151632  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:28.151677  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:28.151731  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:28.179686  443658 cri.go:89] found id: ""
	I1014 19:48:28.179707  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.179718  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:28.179725  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:28.179796  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:28.207048  443658 cri.go:89] found id: ""
	I1014 19:48:28.207065  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.207073  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:28.207081  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:28.207092  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:28.273826  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:28.273858  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:28.291974  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:28.291996  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:28.350599  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:28.343032    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344089    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344502    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346102    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346541    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:28.350610  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:28.350620  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:28.412963  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:28.412999  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:30.943653  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:30.954861  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:30.954918  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:30.982663  443658 cri.go:89] found id: ""
	I1014 19:48:30.982687  443658 logs.go:282] 0 containers: []
	W1014 19:48:30.982697  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:30.982705  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:30.982790  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:31.010956  443658 cri.go:89] found id: ""
	I1014 19:48:31.010972  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.010982  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:31.010988  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:31.011044  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:31.037820  443658 cri.go:89] found id: ""
	I1014 19:48:31.037835  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.037845  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:31.037851  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:31.037908  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:31.064198  443658 cri.go:89] found id: ""
	I1014 19:48:31.064219  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.064229  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:31.064237  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:31.064290  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:31.090978  443658 cri.go:89] found id: ""
	I1014 19:48:31.091014  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.091025  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:31.091031  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:31.091085  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:31.119501  443658 cri.go:89] found id: ""
	I1014 19:48:31.119519  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.119526  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:31.119531  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:31.119578  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:31.147180  443658 cri.go:89] found id: ""
	I1014 19:48:31.147202  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.147212  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:31.147223  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:31.147235  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:31.215950  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:31.215975  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:31.234800  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:31.234824  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:31.293858  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:31.286222    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.286789    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288416    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288945    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.290474    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:31.293875  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:31.293886  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:31.357651  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:31.357679  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:33.890973  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:33.903698  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:33.903750  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:33.930766  443658 cri.go:89] found id: ""
	I1014 19:48:33.930786  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.930793  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:33.930798  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:33.930850  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:33.958613  443658 cri.go:89] found id: ""
	I1014 19:48:33.958634  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.958644  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:33.958652  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:33.958714  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:33.985879  443658 cri.go:89] found id: ""
	I1014 19:48:33.985900  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.985908  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:33.985913  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:33.985969  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:34.014311  443658 cri.go:89] found id: ""
	I1014 19:48:34.014330  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.014338  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:34.014344  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:34.014406  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:34.042331  443658 cri.go:89] found id: ""
	I1014 19:48:34.042352  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.042361  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:34.042369  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:34.042432  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:34.070428  443658 cri.go:89] found id: ""
	I1014 19:48:34.070446  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.070456  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:34.070463  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:34.070517  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:34.097884  443658 cri.go:89] found id: ""
	I1014 19:48:34.097903  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.097921  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:34.097931  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:34.097948  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:34.157332  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:34.149617    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.150366    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152026    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152566    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.153919    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:34.157346  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:34.157361  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:34.220371  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:34.220398  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:34.250307  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:34.250325  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:34.315972  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:34.315994  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
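Every describe-nodes attempt in this run fails identically: dial tcp [::1]:8441 is refused, meaning nothing is listening on the apiserver port at all. Two generic checks that confirm the symptom independently of kubectl (these commands are illustrative assumptions, not part of the test):

# Is anything serving the apiserver port? (sketch)
sudo ss -tlnp | grep 8441 || echo "no listener on port 8441"
# /healthz is the standard apiserver health endpoint; -k skips cert checks.
curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"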
	I1014 19:48:36.835436  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:36.846681  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:36.846733  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:36.873365  443658 cri.go:89] found id: ""
	I1014 19:48:36.873381  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.873389  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:36.873394  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:36.873447  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:36.900441  443658 cri.go:89] found id: ""
	I1014 19:48:36.900458  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.900464  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:36.900469  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:36.900528  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:36.928334  443658 cri.go:89] found id: ""
	I1014 19:48:36.928352  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.928359  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:36.928364  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:36.928432  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:36.955215  443658 cri.go:89] found id: ""
	I1014 19:48:36.955234  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.955244  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:36.955249  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:36.955304  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:36.982183  443658 cri.go:89] found id: ""
	I1014 19:48:36.982201  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.982208  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:36.982213  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:36.982270  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:37.009766  443658 cri.go:89] found id: ""
	I1014 19:48:37.009788  443658 logs.go:282] 0 containers: []
	W1014 19:48:37.009798  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:37.009803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:37.009852  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:37.036432  443658 cri.go:89] found id: ""
	I1014 19:48:37.036454  443658 logs.go:282] 0 containers: []
	W1014 19:48:37.036464  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:37.036474  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:37.036484  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:37.101021  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:37.101045  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:37.132706  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:37.132724  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:37.200337  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:37.200365  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:37.218525  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:37.218545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:37.279294  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:37.271380    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.272016    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.273706    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.274226    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.275831    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:39.779639  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:39.791242  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:39.791305  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:39.817960  443658 cri.go:89] found id: ""
	I1014 19:48:39.817977  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.817984  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:39.817989  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:39.818038  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:39.845643  443658 cri.go:89] found id: ""
	I1014 19:48:39.845661  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.845668  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:39.845673  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:39.845724  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:39.872711  443658 cri.go:89] found id: ""
	I1014 19:48:39.872727  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.872734  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:39.872738  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:39.872815  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:39.900683  443658 cri.go:89] found id: ""
	I1014 19:48:39.900705  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.900714  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:39.900719  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:39.900807  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:39.929509  443658 cri.go:89] found id: ""
	I1014 19:48:39.929529  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.929540  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:39.929546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:39.929599  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:39.955582  443658 cri.go:89] found id: ""
	I1014 19:48:39.955598  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.955605  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:39.955610  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:39.955657  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:39.983710  443658 cri.go:89] found id: ""
	I1014 19:48:39.983727  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.983736  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:39.983744  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:39.983782  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:40.052784  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:40.052811  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:40.070963  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:40.070983  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:40.129639  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:40.122787    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.123371    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.124932    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.125359    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.126495    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:40.129685  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:40.129697  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:40.191333  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:40.191359  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:42.723817  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:42.735282  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:42.735333  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:42.762376  443658 cri.go:89] found id: ""
	I1014 19:48:42.762395  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.762402  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:42.762407  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:42.762455  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:42.789118  443658 cri.go:89] found id: ""
	I1014 19:48:42.789136  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.789142  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:42.789147  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:42.789194  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:42.816692  443658 cri.go:89] found id: ""
	I1014 19:48:42.816709  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.816717  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:42.816721  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:42.816787  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:42.844094  443658 cri.go:89] found id: ""
	I1014 19:48:42.844111  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.844117  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:42.844122  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:42.844169  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:42.871946  443658 cri.go:89] found id: ""
	I1014 19:48:42.871964  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.871971  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:42.871975  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:42.872038  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:42.899614  443658 cri.go:89] found id: ""
	I1014 19:48:42.899632  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.899638  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:42.899643  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:42.899689  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:42.927253  443658 cri.go:89] found id: ""
	I1014 19:48:42.927269  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.927277  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:42.927285  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:42.927301  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:42.994077  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:42.994105  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:43.012747  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:43.012777  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:43.071125  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:43.063880    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.064444    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066049    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066536    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.068056    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:43.071145  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:43.071157  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:43.136102  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:43.136125  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:45.668732  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:45.679980  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:45.680041  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:45.708000  443658 cri.go:89] found id: ""
	I1014 19:48:45.708030  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.708040  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:45.708046  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:45.708093  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:45.736452  443658 cri.go:89] found id: ""
	I1014 19:48:45.736530  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.736542  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:45.736548  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:45.736603  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:45.764163  443658 cri.go:89] found id: ""
	I1014 19:48:45.764184  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.764194  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:45.764201  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:45.764259  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:45.791827  443658 cri.go:89] found id: ""
	I1014 19:48:45.791842  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.791848  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:45.791854  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:45.791912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:45.819509  443658 cri.go:89] found id: ""
	I1014 19:48:45.819529  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.819540  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:45.819547  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:45.819609  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:45.847227  443658 cri.go:89] found id: ""
	I1014 19:48:45.847248  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.847259  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:45.847266  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:45.847329  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:45.873974  443658 cri.go:89] found id: ""
	I1014 19:48:45.873995  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.874004  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:45.874015  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:45.874030  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:45.932513  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:45.925000    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.925641    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927410    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927848    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.929196    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:45.925000    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.925641    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927410    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927848    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.929196    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:45.932528  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:45.932545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:45.993477  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:45.993504  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:46.025620  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:46.025638  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:46.097209  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:46.097236  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
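Each poll cycle above repeats the same per-component crictl query. As a sketch for manual reproduction on the node (for example after `minikube ssh`): the loop and echo wording below are illustrative, but every crictl flag is copied verbatim from the Run: lines above.

    #!/usr/bin/env bash
    # Sketch only: re-run minikube's per-component container check by hand.
    # Assumes crictl is available on the node (e.g. inside `minikube ssh`).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      # Same invocation the log shows: list all containers matching the name.
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done

An empty result for every component, as in each cycle here, means the container runtime never started any control-plane containers.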
	I1014 19:48:48.617067  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:48.628616  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:48.628683  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:48.655361  443658 cri.go:89] found id: ""
	I1014 19:48:48.655377  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.655388  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:48.655395  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:48.655458  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:48.681992  443658 cri.go:89] found id: ""
	I1014 19:48:48.682008  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.682015  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:48.682020  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:48.682065  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:48.708630  443658 cri.go:89] found id: ""
	I1014 19:48:48.708647  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.708654  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:48.708658  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:48.708726  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:48.735832  443658 cri.go:89] found id: ""
	I1014 19:48:48.735848  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.735859  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:48.735863  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:48.735921  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:48.763984  443658 cri.go:89] found id: ""
	I1014 19:48:48.763999  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.764017  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:48.764022  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:48.764074  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:48.790052  443658 cri.go:89] found id: ""
	I1014 19:48:48.790072  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.790081  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:48.790088  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:48.790137  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:48.816830  443658 cri.go:89] found id: ""
	I1014 19:48:48.816847  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.816854  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:48.816863  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:48.816874  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:48.885983  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:48.886007  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:48.904564  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:48.904584  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:48.963221  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:48.955419    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.956384    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.957942    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.958423    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.960005    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:48.955419    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.956384    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.957942    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.958423    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.960005    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:48.963232  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:48.963245  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:49.024076  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:49.024100  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
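The describe-nodes step fails identically in every cycle: kubectl cannot reach the apiserver on localhost:8441. A minimal sketch for confirming that from the node is below; the curl probe is an assumption (any TCP client would do), while the kubectl invocation is copied from the log.

    # Sketch, assuming curl is present on the node: probe the apiserver
    # port that kubectl fails to reach in the cycles above.
    curl -sk https://localhost:8441/healthz \
      || echo "apiserver not reachable on localhost:8441"

    # The exact command minikube runs; it exits 1 while the port is closed.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig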
	I1014 19:48:51.555915  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:51.567493  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:51.567566  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:51.593927  443658 cri.go:89] found id: ""
	I1014 19:48:51.593943  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.593950  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:51.593955  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:51.594000  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:51.622234  443658 cri.go:89] found id: ""
	I1014 19:48:51.622250  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.622257  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:51.622261  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:51.622306  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:51.648637  443658 cri.go:89] found id: ""
	I1014 19:48:51.648654  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.648660  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:51.648666  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:51.648730  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:51.675538  443658 cri.go:89] found id: ""
	I1014 19:48:51.675559  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.675570  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:51.675577  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:51.675631  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:51.701640  443658 cri.go:89] found id: ""
	I1014 19:48:51.701657  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.701664  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:51.701670  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:51.701730  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:51.729739  443658 cri.go:89] found id: ""
	I1014 19:48:51.729770  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.729782  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:51.729789  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:51.729839  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:51.757162  443658 cri.go:89] found id: ""
	I1014 19:48:51.757184  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.757195  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:51.757206  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:51.757225  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:51.825383  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:51.825408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:51.843441  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:51.843462  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:51.901599  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:51.893806    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.894477    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896214    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896786    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.898462    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:51.893806    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.894477    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896214    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896786    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.898462    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:51.901609  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:51.901621  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:51.963670  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:51.963696  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
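For reference, the four log sources each gather pass collects are plain shell commands. Grouping them in one script is an illustrative convenience; each command itself is copied verbatim from the Run: lines above.

    #!/usr/bin/env bash
    # Sketch: the log sources minikube collects in each gather pass,
    # runnable directly on the node.
    sudo journalctl -u kubelet -n 400   # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400      # CRI-O logs
    # Container status, falling back to docker if crictl is absent.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a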
	I1014 19:48:54.494451  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:54.505690  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:54.505748  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:54.532934  443658 cri.go:89] found id: ""
	I1014 19:48:54.532956  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.532966  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:54.532973  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:54.533035  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:54.560665  443658 cri.go:89] found id: ""
	I1014 19:48:54.560682  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.560689  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:54.560693  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:54.560746  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:54.587851  443658 cri.go:89] found id: ""
	I1014 19:48:54.587871  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.587882  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:54.587889  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:54.587939  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:54.615307  443658 cri.go:89] found id: ""
	I1014 19:48:54.615324  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.615331  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:54.615336  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:54.615381  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:54.642900  443658 cri.go:89] found id: ""
	I1014 19:48:54.642916  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.642922  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:54.642928  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:54.642987  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:54.670686  443658 cri.go:89] found id: ""
	I1014 19:48:54.670702  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.670710  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:54.670715  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:54.670784  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:54.697226  443658 cri.go:89] found id: ""
	I1014 19:48:54.697246  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.697255  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:54.697266  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:54.697280  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:54.759777  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:54.759804  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:54.790599  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:54.790617  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:54.864057  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:54.864090  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:54.882103  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:54.882128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:54.942079  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:54.934581    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.935124    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.936659    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.937300    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.938843    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:54.934581    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.935124    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.936659    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.937300    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.938843    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:57.443958  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:57.455537  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:57.455596  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:57.482660  443658 cri.go:89] found id: ""
	I1014 19:48:57.482684  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.482694  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:57.482704  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:57.482783  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:57.510445  443658 cri.go:89] found id: ""
	I1014 19:48:57.510461  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.510467  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:57.510471  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:57.510523  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:57.537439  443658 cri.go:89] found id: ""
	I1014 19:48:57.537456  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.537464  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:57.537469  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:57.537515  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:57.564369  443658 cri.go:89] found id: ""
	I1014 19:48:57.564386  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.564394  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:57.564401  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:57.564455  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:57.591584  443658 cri.go:89] found id: ""
	I1014 19:48:57.591601  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.591607  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:57.591612  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:57.591657  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:57.620996  443658 cri.go:89] found id: ""
	I1014 19:48:57.621016  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.621026  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:57.621033  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:57.621096  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:57.650978  443658 cri.go:89] found id: ""
	I1014 19:48:57.650994  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.651001  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:57.651010  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:57.651022  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:57.709879  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:57.701644    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.702204    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.704523    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.705023    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.706491    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:57.701644    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.702204    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.704523    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.705023    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.706491    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:57.709895  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:57.709906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:57.773086  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:57.773110  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:57.804357  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:57.804375  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:57.876116  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:57.876141  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:00.397550  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:00.408833  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:00.408898  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:00.436551  443658 cri.go:89] found id: ""
	I1014 19:49:00.436572  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.436580  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:00.436586  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:00.436643  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:00.463380  443658 cri.go:89] found id: ""
	I1014 19:49:00.463398  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.463406  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:00.463411  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:00.463464  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:00.489936  443658 cri.go:89] found id: ""
	I1014 19:49:00.489953  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.489961  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:00.489967  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:00.490025  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:00.517733  443658 cri.go:89] found id: ""
	I1014 19:49:00.517777  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.517789  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:00.517799  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:00.517853  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:00.545738  443658 cri.go:89] found id: ""
	I1014 19:49:00.545770  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.545782  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:00.545789  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:00.545847  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:00.572980  443658 cri.go:89] found id: ""
	I1014 19:49:00.572998  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.573007  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:00.573013  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:00.573073  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:00.601579  443658 cri.go:89] found id: ""
	I1014 19:49:00.601596  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.601608  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:00.601620  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:00.601634  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:00.664237  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:00.664264  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:00.696881  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:00.696906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:00.769175  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:00.769201  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:00.787483  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:00.787504  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:00.845998  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:00.838686    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.839226    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.840825    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.841284    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.842865    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:00.838686    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.839226    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.840825    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.841284    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.842865    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:03.347716  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:03.359494  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:03.359550  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:03.387814  443658 cri.go:89] found id: ""
	I1014 19:49:03.387833  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.387842  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:03.387848  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:03.387913  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:03.416379  443658 cri.go:89] found id: ""
	I1014 19:49:03.416400  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.416410  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:03.416415  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:03.416466  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:03.444338  443658 cri.go:89] found id: ""
	I1014 19:49:03.444355  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.444364  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:03.444368  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:03.444429  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:03.472283  443658 cri.go:89] found id: ""
	I1014 19:49:03.472299  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.472306  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:03.472311  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:03.472368  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:03.499924  443658 cri.go:89] found id: ""
	I1014 19:49:03.499940  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.499947  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:03.499951  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:03.500014  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:03.528675  443658 cri.go:89] found id: ""
	I1014 19:49:03.528691  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.528698  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:03.528703  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:03.528780  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:03.555961  443658 cri.go:89] found id: ""
	I1014 19:49:03.555979  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.555986  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:03.555995  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:03.556009  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:03.615676  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:03.608021    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.608674    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610310    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610821    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.612076    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:03.608021    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.608674    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610310    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610821    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.612076    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:03.615687  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:03.615699  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:03.680122  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:03.680151  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:03.712091  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:03.712109  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:03.779370  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:03.779396  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:06.297908  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:06.309773  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:06.309831  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:06.337910  443658 cri.go:89] found id: ""
	I1014 19:49:06.337930  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.337939  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:06.337946  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:06.337996  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:06.366075  443658 cri.go:89] found id: ""
	I1014 19:49:06.366090  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.366097  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:06.366102  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:06.366149  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:06.393203  443658 cri.go:89] found id: ""
	I1014 19:49:06.393219  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.393225  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:06.393230  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:06.393274  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:06.421220  443658 cri.go:89] found id: ""
	I1014 19:49:06.421240  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.421250  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:06.421257  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:06.421322  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:06.449354  443658 cri.go:89] found id: ""
	I1014 19:49:06.449373  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.449382  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:06.449388  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:06.449450  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:06.476432  443658 cri.go:89] found id: ""
	I1014 19:49:06.476450  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.476459  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:06.476467  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:06.476536  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:06.504006  443658 cri.go:89] found id: ""
	I1014 19:49:06.504031  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.504038  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:06.504047  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:06.504057  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:06.533877  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:06.533894  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:06.600597  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:06.600622  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:06.619193  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:06.619216  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:06.680047  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:06.672165    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.672728    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.674412    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.675003    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.676679    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:06.672165    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.672728    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.674412    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.675003    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.676679    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:06.680057  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:06.680069  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:09.242233  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:09.253413  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:09.253465  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:09.280670  443658 cri.go:89] found id: ""
	I1014 19:49:09.280688  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.280698  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:09.280705  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:09.280776  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:09.307015  443658 cri.go:89] found id: ""
	I1014 19:49:09.307033  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.307043  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:09.307049  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:09.307104  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:09.334276  443658 cri.go:89] found id: ""
	I1014 19:49:09.334296  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.334304  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:09.334309  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:09.334357  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:09.360472  443658 cri.go:89] found id: ""
	I1014 19:49:09.360487  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.360494  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:09.360499  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:09.360549  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:09.388322  443658 cri.go:89] found id: ""
	I1014 19:49:09.388338  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.388345  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:09.388349  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:09.388396  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:09.414924  443658 cri.go:89] found id: ""
	I1014 19:49:09.414944  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.414955  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:09.414962  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:09.415023  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:09.441772  443658 cri.go:89] found id: ""
	I1014 19:49:09.441792  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.441800  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:09.441809  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:09.441822  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:09.509426  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:09.509452  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:09.527807  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:09.527829  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:09.587241  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:09.579349    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.579944    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582253    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582735    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.583971    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:09.579349    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.579944    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582253    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582735    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.583971    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:09.587253  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:09.587265  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:09.654561  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:09.654584  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
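
The cycle above is minikube's control-plane probe: a pgrep for a running kube-apiserver, one crictl query per expected component, then log gathering when nothing is found. A minimal standalone sketch of the same probe, reusing only the commands visible in the log (the loop itself is illustrative, not minikube's implementation):

    #!/bin/bash
    # Query the CRI for each control-plane component, as the log above does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    # With no containers running, collect the same diagnostics minikube gathers.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
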
	I1014 19:49:12.186794  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:12.198312  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:12.198367  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:12.225457  443658 cri.go:89] found id: ""
	I1014 19:49:12.225476  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.225491  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:12.225497  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:12.225548  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:12.253224  443658 cri.go:89] found id: ""
	I1014 19:49:12.253243  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.253251  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:12.253256  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:12.253317  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:12.280591  443658 cri.go:89] found id: ""
	I1014 19:49:12.280610  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.280617  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:12.280622  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:12.280674  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:12.309016  443658 cri.go:89] found id: ""
	I1014 19:49:12.309033  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.309039  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:12.309044  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:12.309091  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:12.337230  443658 cri.go:89] found id: ""
	I1014 19:49:12.337251  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.337260  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:12.337267  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:12.337336  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:12.364682  443658 cri.go:89] found id: ""
	I1014 19:49:12.364728  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.364737  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:12.364743  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:12.364821  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:12.392936  443658 cri.go:89] found id: ""
	I1014 19:49:12.392960  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.392967  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:12.392976  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:12.392986  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:12.452595  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:12.444355    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.444853    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.446438    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.447015    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.449368    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:12.444355    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.444853    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.446438    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.447015    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.449368    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:12.452608  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:12.452621  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:12.516437  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:12.516463  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:12.547372  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:12.547391  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:12.614937  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:12.614961  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
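
Every "describe nodes" attempt above fails identically: kubectl dials the server from the node's kubeconfig (localhost:8441) and gets connection refused, meaning nothing is listening on the apiserver port at all. A quick manual confirmation from inside the node, assuming curl is available there (this check is not part of the test run):

    # TLS probe of the apiserver port; "connection refused" here matches the kubectl errors.
    curl -sk https://localhost:8441/healthz || echo "nothing listening on :8441"
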
	I1014 19:49:15.134260  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:15.146546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:15.146600  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:15.174510  443658 cri.go:89] found id: ""
	I1014 19:49:15.174526  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.174533  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:15.174538  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:15.174585  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:15.202132  443658 cri.go:89] found id: ""
	I1014 19:49:15.202152  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.202162  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:15.202169  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:15.202226  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:15.230616  443658 cri.go:89] found id: ""
	I1014 19:49:15.230633  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.230639  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:15.230644  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:15.230696  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:15.258236  443658 cri.go:89] found id: ""
	I1014 19:49:15.258253  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.258263  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:15.258267  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:15.258326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:15.286042  443658 cri.go:89] found id: ""
	I1014 19:49:15.286059  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.286066  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:15.286072  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:15.286134  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:15.314815  443658 cri.go:89] found id: ""
	I1014 19:49:15.314833  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.314840  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:15.314844  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:15.314897  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:15.341953  443658 cri.go:89] found id: ""
	I1014 19:49:15.341969  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.341976  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:15.341984  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:15.341995  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:15.412363  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:15.412387  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:15.430737  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:15.430770  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:15.492263  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:15.483535   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.484124   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.485892   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.486398   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.489083   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:15.483535   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.484124   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.485892   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.486398   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.489083   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:15.492274  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:15.492286  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:15.556874  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:15.556899  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
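
The "container status" gather just above uses a two-level fallback: command substitution resolves crictl if it is on PATH (otherwise the bare name, so the failure surfaces), and if that invocation fails the pipeline falls back to docker ps. The same idiom in isolation:

    # Prefer crictl when present; fall back to docker if the crictl call fails.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
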
	I1014 19:49:18.089267  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:18.101164  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:18.101225  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:18.130411  443658 cri.go:89] found id: ""
	I1014 19:49:18.130428  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.130435  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:18.130440  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:18.130500  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:18.157908  443658 cri.go:89] found id: ""
	I1014 19:49:18.157927  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.157938  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:18.157943  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:18.157997  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:18.185537  443658 cri.go:89] found id: ""
	I1014 19:49:18.185560  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.185568  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:18.185573  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:18.185627  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:18.212466  443658 cri.go:89] found id: ""
	I1014 19:49:18.212485  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.212493  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:18.212498  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:18.212561  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:18.239975  443658 cri.go:89] found id: ""
	I1014 19:49:18.239993  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.240000  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:18.240005  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:18.240056  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:18.267082  443658 cri.go:89] found id: ""
	I1014 19:49:18.267101  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.267109  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:18.267114  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:18.267163  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:18.293654  443658 cri.go:89] found id: ""
	I1014 19:49:18.293672  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.293679  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:18.293689  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:18.293700  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:18.363853  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:18.363878  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:18.383522  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:18.383545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:18.442304  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:18.435285   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.435849   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437451   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437904   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.438994   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:18.435285   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.435849   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437451   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437904   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.438994   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:18.442316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:18.442327  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:18.503728  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:18.503752  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
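
The describe-nodes command runs the version-pinned kubectl that minikube installs on the node (/var/lib/minikube/binaries/v1.34.1/kubectl) against the node-local kubeconfig. To confirm which endpoint that kubeconfig targets, and that it matches the :8441 in the errors, kubectl can print it directly; this invocation is an illustration, not something the test executes:

    # Print the server URL the on-node kubeconfig points at.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig --minify \
      -o jsonpath='{.clusters[0].cluster.server}'
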
	I1014 19:49:21.035160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:21.046500  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:21.046556  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:21.073686  443658 cri.go:89] found id: ""
	I1014 19:49:21.073705  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.073716  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:21.073723  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:21.073790  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:21.100037  443658 cri.go:89] found id: ""
	I1014 19:49:21.100052  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.100059  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:21.100064  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:21.100107  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:21.127167  443658 cri.go:89] found id: ""
	I1014 19:49:21.127183  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.127190  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:21.127195  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:21.127243  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:21.155028  443658 cri.go:89] found id: ""
	I1014 19:49:21.155045  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.155052  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:21.155056  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:21.155104  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:21.182898  443658 cri.go:89] found id: ""
	I1014 19:49:21.182919  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.182926  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:21.182931  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:21.182981  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:21.214304  443658 cri.go:89] found id: ""
	I1014 19:49:21.214321  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.214327  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:21.214332  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:21.214377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:21.242021  443658 cri.go:89] found id: ""
	I1014 19:49:21.242038  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.242045  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:21.242053  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:21.242065  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:21.259561  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:21.259582  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:21.319723  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:21.312041   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.312668   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314370   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314958   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.316607   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:21.312041   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.312668   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314370   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314958   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.316607   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:21.319734  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:21.319745  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:21.380339  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:21.380373  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:21.410561  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:21.410580  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:23.982170  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:23.993512  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:23.993566  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:24.021666  443658 cri.go:89] found id: ""
	I1014 19:49:24.021681  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.021688  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:24.021693  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:24.021777  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:24.048763  443658 cri.go:89] found id: ""
	I1014 19:49:24.048788  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.048799  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:24.048806  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:24.048868  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:24.076823  443658 cri.go:89] found id: ""
	I1014 19:49:24.076845  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.076856  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:24.076862  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:24.076920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:24.104097  443658 cri.go:89] found id: ""
	I1014 19:49:24.104117  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.104126  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:24.104130  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:24.104182  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:24.130667  443658 cri.go:89] found id: ""
	I1014 19:49:24.130682  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.130691  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:24.130696  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:24.130747  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:24.158412  443658 cri.go:89] found id: ""
	I1014 19:49:24.158429  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.158437  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:24.158442  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:24.158491  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:24.185765  443658 cri.go:89] found id: ""
	I1014 19:49:24.185785  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.185793  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:24.185801  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:24.185813  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:24.244433  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:24.236694   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.237287   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.238941   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.239414   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.240968   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:24.236694   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.237287   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.238941   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.239414   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.240968   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:24.244454  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:24.244469  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:24.307235  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:24.307260  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:24.337358  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:24.337379  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:24.406396  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:24.406421  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:26.925678  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:26.936862  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:26.936911  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:26.963233  443658 cri.go:89] found id: ""
	I1014 19:49:26.963249  443658 logs.go:282] 0 containers: []
	W1014 19:49:26.963256  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:26.963261  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:26.963318  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:26.989526  443658 cri.go:89] found id: ""
	I1014 19:49:26.989545  443658 logs.go:282] 0 containers: []
	W1014 19:49:26.989553  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:26.989558  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:26.989606  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:27.016445  443658 cri.go:89] found id: ""
	I1014 19:49:27.016461  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.016468  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:27.016473  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:27.016536  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:27.044936  443658 cri.go:89] found id: ""
	I1014 19:49:27.044954  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.044961  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:27.044965  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:27.045023  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:27.071859  443658 cri.go:89] found id: ""
	I1014 19:49:27.071881  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.071891  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:27.071898  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:27.071964  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:27.101404  443658 cri.go:89] found id: ""
	I1014 19:49:27.101421  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.101431  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:27.101439  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:27.101492  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:27.130140  443658 cri.go:89] found id: ""
	I1014 19:49:27.130158  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.130168  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:27.130178  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:27.130192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:27.191223  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:27.183739   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.184372   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.185983   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.186439   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.188034   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:27.183739   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.184372   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.185983   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.186439   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.188034   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:27.191237  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:27.191249  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:27.255430  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:27.255456  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:27.285702  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:27.285740  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:27.352209  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:27.352234  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:29.872354  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:29.883680  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:29.883735  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:29.911601  443658 cri.go:89] found id: ""
	I1014 19:49:29.911621  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.911628  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:29.911634  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:29.911681  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:29.940396  443658 cri.go:89] found id: ""
	I1014 19:49:29.940412  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.940419  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:29.940424  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:29.940471  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:29.969195  443658 cri.go:89] found id: ""
	I1014 19:49:29.969213  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.969220  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:29.969225  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:29.969275  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:29.997694  443658 cri.go:89] found id: ""
	I1014 19:49:29.997715  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.997725  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:29.997732  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:29.997818  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:30.027488  443658 cri.go:89] found id: ""
	I1014 19:49:30.027506  443658 logs.go:282] 0 containers: []
	W1014 19:49:30.027514  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:30.027518  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:30.027568  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:30.054599  443658 cri.go:89] found id: ""
	I1014 19:49:30.054617  443658 logs.go:282] 0 containers: []
	W1014 19:49:30.054625  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:30.054630  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:30.054709  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:30.081817  443658 cri.go:89] found id: ""
	I1014 19:49:30.081833  443658 logs.go:282] 0 containers: []
	W1014 19:49:30.081843  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:30.081854  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:30.081870  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:30.145428  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:30.145454  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:30.177045  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:30.177064  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:30.244236  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:30.244263  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:30.262247  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:30.262268  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:30.320401  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:30.313011   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.313520   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.315086   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.315515   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.317170   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:30.313011   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.313520   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.315086   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.315515   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.317170   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
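
The probe repeats on a roughly three-second cadence (19:49:09, :12, :15, ...) until minikube's wait deadline expires. Reduced to a shell loop with an explicit deadline, the wait looks like this (the 300-second budget is an arbitrary stand-in, not the test's actual timeout):

    # Poll for the apiserver process with a deadline, mirroring the log's cadence.
    deadline=$((SECONDS + 300))   # stand-in budget; the real limit is minikube's own
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
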
	I1014 19:49:32.822227  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:32.833616  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:32.833715  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:32.861467  443658 cri.go:89] found id: ""
	I1014 19:49:32.861484  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.861493  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:32.861499  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:32.861567  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:32.889541  443658 cri.go:89] found id: ""
	I1014 19:49:32.889559  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.889566  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:32.889571  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:32.889616  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:32.915877  443658 cri.go:89] found id: ""
	I1014 19:49:32.915896  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.915904  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:32.915908  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:32.915969  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:32.943538  443658 cri.go:89] found id: ""
	I1014 19:49:32.943558  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.943568  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:32.943573  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:32.943635  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:32.969493  443658 cri.go:89] found id: ""
	I1014 19:49:32.969511  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.969518  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:32.969523  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:32.969581  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:32.995650  443658 cri.go:89] found id: ""
	I1014 19:49:32.995671  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.995679  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:32.995684  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:32.995765  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:33.023836  443658 cri.go:89] found id: ""
	I1014 19:49:33.023856  443658 logs.go:282] 0 containers: []
	W1014 19:49:33.023866  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:33.023876  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:33.023889  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:33.054135  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:33.054157  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:33.120594  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:33.120618  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:33.138783  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:33.138803  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:33.197459  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:33.189973   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.190463   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.192089   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.192508   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.194210   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:33.189973   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.190463   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.192089   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.192508   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.194210   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:33.197473  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:33.197483  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:35.763533  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:35.775555  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:35.775604  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:35.802773  443658 cri.go:89] found id: ""
	I1014 19:49:35.802794  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.802800  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:35.802805  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:35.802853  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:35.830466  443658 cri.go:89] found id: ""
	I1014 19:49:35.830481  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.830488  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:35.830499  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:35.830545  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:35.857322  443658 cri.go:89] found id: ""
	I1014 19:49:35.857342  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.857350  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:35.857354  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:35.857407  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:35.884681  443658 cri.go:89] found id: ""
	I1014 19:49:35.884705  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.884711  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:35.884717  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:35.884785  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:35.913187  443658 cri.go:89] found id: ""
	I1014 19:49:35.913205  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.913212  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:35.913219  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:35.913284  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:35.941275  443658 cri.go:89] found id: ""
	I1014 19:49:35.941296  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.941306  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:35.941312  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:35.941404  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:35.968221  443658 cri.go:89] found id: ""
	I1014 19:49:35.968242  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.968249  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:35.968258  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:35.968269  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:35.997909  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:35.997926  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:36.065160  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:36.065186  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:36.084069  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:36.084094  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:36.143710  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:36.136552   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.137091   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.138749   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.139231   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.140429   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:36.143728  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:36.143743  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
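Each retry cycle above runs the same diagnostic sequence while waiting for the apiserver. A minimal sketch of the equivalent manual checks follows; the commands themselves are taken verbatim from the log, while wrapping them in `minikube ssh -p <profile> "..."` is an assumption and <profile> is a placeholder for the test's profile name (not shown in this excerpt).

# Hedged sketch: reproduce one diagnostic cycle by hand, not the harness itself.
minikube ssh -p <profile> "sudo pgrep -xnf kube-apiserver.*minikube.*"       # is an apiserver process running?
minikube ssh -p <profile> "sudo crictl ps -a --quiet --name=kube-apiserver"  # any apiserver container, in any state?
minikube ssh -p <profile> "sudo journalctl -u kubelet -n 400"                # kubelet logs
minikube ssh -p <profile> "sudo journalctl -u crio -n 400"                   # CRI-O logs
minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"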
	I1014 19:49:38.705714  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:38.717101  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:38.717153  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:38.743695  443658 cri.go:89] found id: ""
	I1014 19:49:38.743711  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.743720  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:38.743725  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:38.743801  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:38.771046  443658 cri.go:89] found id: ""
	I1014 19:49:38.771062  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.771069  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:38.771074  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:38.771120  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:38.798553  443658 cri.go:89] found id: ""
	I1014 19:49:38.798569  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.798579  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:38.798585  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:38.798651  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:38.825740  443658 cri.go:89] found id: ""
	I1014 19:49:38.825773  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.825784  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:38.825790  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:38.825842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:38.852044  443658 cri.go:89] found id: ""
	I1014 19:49:38.852063  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.852074  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:38.852081  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:38.852138  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:38.879494  443658 cri.go:89] found id: ""
	I1014 19:49:38.879511  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.879519  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:38.879524  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:38.879572  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:38.908560  443658 cri.go:89] found id: ""
	I1014 19:49:38.908579  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.908587  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:38.908597  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:38.908608  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:38.967381  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:38.960253   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.960835   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.962461   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.962872   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.964250   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:38.967392  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:38.967407  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:39.029751  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:39.029782  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:39.060387  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:39.060407  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:39.131578  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:39.131603  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:41.650879  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:41.662649  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:41.662714  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:41.690616  443658 cri.go:89] found id: ""
	I1014 19:49:41.690632  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.690639  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:41.690644  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:41.690726  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:41.717290  443658 cri.go:89] found id: ""
	I1014 19:49:41.717307  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.717315  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:41.717319  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:41.717370  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:41.744219  443658 cri.go:89] found id: ""
	I1014 19:49:41.744235  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.744242  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:41.744247  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:41.744291  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:41.771856  443658 cri.go:89] found id: ""
	I1014 19:49:41.771874  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.771881  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:41.771886  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:41.771933  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:41.798980  443658 cri.go:89] found id: ""
	I1014 19:49:41.798997  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.799008  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:41.799014  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:41.799082  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:41.824815  443658 cri.go:89] found id: ""
	I1014 19:49:41.824833  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.824841  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:41.824847  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:41.824910  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:41.853352  443658 cri.go:89] found id: ""
	I1014 19:49:41.853369  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.853377  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:41.853385  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:41.853397  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:41.871201  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:41.871221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:41.931818  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:41.924117   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.924656   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.926161   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.926706   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.928205   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:41.931829  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:41.931839  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:41.997739  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:41.997769  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:42.030107  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:42.030126  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:44.596638  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:44.608335  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:44.608403  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:44.636505  443658 cri.go:89] found id: ""
	I1014 19:49:44.636523  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.636530  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:44.636535  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:44.636592  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:44.663068  443658 cri.go:89] found id: ""
	I1014 19:49:44.663085  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.663091  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:44.663097  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:44.663156  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:44.691243  443658 cri.go:89] found id: ""
	I1014 19:49:44.691259  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.691265  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:44.691270  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:44.691329  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:44.718866  443658 cri.go:89] found id: ""
	I1014 19:49:44.718889  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.718900  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:44.718907  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:44.718964  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:44.746897  443658 cri.go:89] found id: ""
	I1014 19:49:44.746918  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.746926  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:44.746930  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:44.746982  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:44.775031  443658 cri.go:89] found id: ""
	I1014 19:49:44.775049  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.775058  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:44.775065  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:44.775134  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:44.803293  443658 cri.go:89] found id: ""
	I1014 19:49:44.803309  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.803317  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:44.803326  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:44.803340  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:44.875474  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:44.875500  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:44.894197  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:44.894221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:44.953777  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:44.946510   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.947021   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.948628   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.949193   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.950677   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:44.953793  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:44.953807  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:45.014704  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:45.014730  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:47.548453  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:47.559665  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:47.559718  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:47.585634  443658 cri.go:89] found id: ""
	I1014 19:49:47.585654  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.585664  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:47.585671  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:47.585770  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:47.613859  443658 cri.go:89] found id: ""
	I1014 19:49:47.613878  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.613888  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:47.613894  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:47.613973  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:47.644468  443658 cri.go:89] found id: ""
	I1014 19:49:47.644489  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.644498  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:47.644504  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:47.644577  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:47.673671  443658 cri.go:89] found id: ""
	I1014 19:49:47.673689  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.673700  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:47.673708  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:47.673794  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:47.702597  443658 cri.go:89] found id: ""
	I1014 19:49:47.702613  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.702621  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:47.702626  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:47.702687  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:47.729519  443658 cri.go:89] found id: ""
	I1014 19:49:47.729535  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.729542  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:47.729546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:47.729594  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:47.757807  443658 cri.go:89] found id: ""
	I1014 19:49:47.757824  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.757831  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:47.757839  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:47.757853  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:47.829770  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:47.829807  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:47.848287  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:47.848311  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:47.906512  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:47.898946   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.899539   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.901229   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.901705   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.903277   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:47.906525  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:47.906537  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:47.971102  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:47.971128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:50.502817  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:50.514425  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:50.514473  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:50.541600  443658 cri.go:89] found id: ""
	I1014 19:49:50.541620  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.541631  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:50.541637  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:50.541689  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:50.569005  443658 cri.go:89] found id: ""
	I1014 19:49:50.569032  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.569041  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:50.569049  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:50.569121  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:50.597051  443658 cri.go:89] found id: ""
	I1014 19:49:50.597068  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.597075  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:50.597079  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:50.597137  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:50.626382  443658 cri.go:89] found id: ""
	I1014 19:49:50.626405  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.626412  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:50.626419  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:50.626473  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:50.654979  443658 cri.go:89] found id: ""
	I1014 19:49:50.654996  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.655004  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:50.655008  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:50.655078  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:50.683528  443658 cri.go:89] found id: ""
	I1014 19:49:50.683548  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.683558  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:50.683565  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:50.683618  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:50.711499  443658 cri.go:89] found id: ""
	I1014 19:49:50.711517  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.711527  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:50.711537  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:50.711549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:50.778199  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:50.778225  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:50.796226  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:50.796248  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:50.854616  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:50.846701   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.848209   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.848680   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.850246   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.850635   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:50.854631  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:50.854643  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:50.918886  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:50.918914  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:53.451878  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:53.463151  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:53.463203  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:53.489474  443658 cri.go:89] found id: ""
	I1014 19:49:53.489490  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.489499  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:53.489506  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:53.489568  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:53.516620  443658 cri.go:89] found id: ""
	I1014 19:49:53.516638  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.516649  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:53.516656  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:53.516712  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:53.543251  443658 cri.go:89] found id: ""
	I1014 19:49:53.543270  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.543281  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:53.543287  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:53.543354  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:53.570736  443658 cri.go:89] found id: ""
	I1014 19:49:53.570769  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.570779  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:53.570786  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:53.570840  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:53.598355  443658 cri.go:89] found id: ""
	I1014 19:49:53.598372  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.598381  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:53.598387  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:53.598450  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:53.625505  443658 cri.go:89] found id: ""
	I1014 19:49:53.625524  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.625535  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:53.625542  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:53.625592  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:53.654789  443658 cri.go:89] found id: ""
	I1014 19:49:53.654808  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.654815  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:53.654823  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:53.654839  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:53.726281  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:53.726306  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:53.744456  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:53.744480  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:53.804344  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:53.796970   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.797615   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799272   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799836   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.800930   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:53.804365  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:53.804378  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:53.864148  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:53.864174  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:56.397395  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:56.408940  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:56.408994  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:56.436261  443658 cri.go:89] found id: ""
	I1014 19:49:56.436277  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.436284  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:56.436291  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:56.436343  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:56.464497  443658 cri.go:89] found id: ""
	I1014 19:49:56.464514  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.464523  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:56.464529  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:56.464584  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:56.492551  443658 cri.go:89] found id: ""
	I1014 19:49:56.492573  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.492580  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:56.492585  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:56.492634  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:56.519631  443658 cri.go:89] found id: ""
	I1014 19:49:56.519650  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.519661  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:56.519667  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:56.519716  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:56.545245  443658 cri.go:89] found id: ""
	I1014 19:49:56.545262  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.545269  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:56.545274  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:56.545322  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:56.572677  443658 cri.go:89] found id: ""
	I1014 19:49:56.572700  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.572711  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:56.572718  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:56.572795  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:56.601136  443658 cri.go:89] found id: ""
	I1014 19:49:56.601156  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.601167  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:56.601178  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:56.601192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:56.666034  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:56.666060  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:56.698200  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:56.698222  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:56.767958  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:56.767983  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:56.786835  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:56.786860  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:56.845436  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:56.837911   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.838400   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840026   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840573   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.842214   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:59.347179  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:59.358660  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:59.358711  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:59.387000  443658 cri.go:89] found id: ""
	I1014 19:49:59.387027  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.387034  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:59.387040  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:59.387088  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:59.414823  443658 cri.go:89] found id: ""
	I1014 19:49:59.414840  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.414847  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:59.414852  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:59.414912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:59.442607  443658 cri.go:89] found id: ""
	I1014 19:49:59.442624  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.442631  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:59.442636  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:59.442696  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:59.471821  443658 cri.go:89] found id: ""
	I1014 19:49:59.471846  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.471856  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:59.471864  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:59.471937  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:59.498236  443658 cri.go:89] found id: ""
	I1014 19:49:59.498256  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.498263  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:59.498268  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:59.498316  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:59.525020  443658 cri.go:89] found id: ""
	I1014 19:49:59.525039  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.525046  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:59.525051  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:59.525101  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:59.551137  443658 cri.go:89] found id: ""
	I1014 19:49:59.551157  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.551167  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:59.551180  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:59.551192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:59.622834  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:59.622862  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:59.641369  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:59.641392  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:59.701545  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:59.694218   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.694838   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696377   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696859   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.698400   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:59.694218   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.694838   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696377   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696859   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.698400   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:59.701565  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:59.701623  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:59.765745  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:59.765773  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
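
Each diagnostic cycle above probes every control-plane component with `crictl ps -a --quiet --name=<component>` and treats empty output as "No container was found matching". A hedged sketch of that per-component loop follows; the component list and flags are taken verbatim from the log, but the code is an illustration (assuming crictl is on PATH with the privileges the log's `sudo` provides), not minikube's actual cri.go implementation:

    // listcontainers.go - a sketch of the per-component crictl probe seen
    // in the log: run `crictl ps -a --quiet --name=<name>` and report
    // components with no matching container IDs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name", name).Output()
            if err != nil {
                fmt.Printf("probe for %q failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out)) // --quiet prints one container ID per line
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
            } else {
                fmt.Printf("%q: %d container(s)\n", name, len(ids))
            }
        }
    }
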
	I1014 19:50:02.298114  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:02.309805  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:02.309861  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:02.337973  443658 cri.go:89] found id: ""
	I1014 19:50:02.337989  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.337996  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:02.338001  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:02.338069  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:02.366907  443658 cri.go:89] found id: ""
	I1014 19:50:02.366925  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.366933  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:02.366938  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:02.366996  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:02.394409  443658 cri.go:89] found id: ""
	I1014 19:50:02.394427  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.394437  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:02.394445  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:02.394507  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:02.423803  443658 cri.go:89] found id: ""
	I1014 19:50:02.423825  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.423835  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:02.423841  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:02.423894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:02.453316  443658 cri.go:89] found id: ""
	I1014 19:50:02.453346  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.453357  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:02.453363  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:02.453429  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:02.480872  443658 cri.go:89] found id: ""
	I1014 19:50:02.480901  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.480911  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:02.480917  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:02.480981  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:02.508491  443658 cri.go:89] found id: ""
	I1014 19:50:02.508513  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.508520  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:02.508530  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:02.508545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:02.538904  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:02.538926  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:02.604250  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:02.604276  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:02.624221  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:02.624244  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:02.686637  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:02.678751   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.679376   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681040   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681562   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.683182   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:02.678751   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.679376   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681040   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681562   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.683182   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:02.686653  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:02.686670  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:05.248160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:05.259486  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:05.259543  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:05.287245  443658 cri.go:89] found id: ""
	I1014 19:50:05.287266  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.287277  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:05.287283  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:05.287337  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:05.316262  443658 cri.go:89] found id: ""
	I1014 19:50:05.316281  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.316292  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:05.316298  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:05.316357  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:05.345733  443658 cri.go:89] found id: ""
	I1014 19:50:05.345767  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.345779  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:05.345786  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:05.345842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:05.373802  443658 cri.go:89] found id: ""
	I1014 19:50:05.373821  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.373832  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:05.373840  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:05.373907  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:05.401831  443658 cri.go:89] found id: ""
	I1014 19:50:05.401849  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.401856  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:05.401861  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:05.401915  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:05.430126  443658 cri.go:89] found id: ""
	I1014 19:50:05.430148  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.430160  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:05.430167  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:05.430238  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:05.459121  443658 cri.go:89] found id: ""
	I1014 19:50:05.459139  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.459146  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:05.459154  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:05.459166  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:05.519744  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:05.512669   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.513219   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.514764   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.515265   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.516363   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:05.512669   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.513219   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.514764   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.515265   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.516363   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:05.519777  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:05.519791  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:05.584599  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:05.584627  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:05.617086  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:05.617104  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:05.684896  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:05.684924  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:08.207248  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:08.218426  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:08.218487  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:08.245002  443658 cri.go:89] found id: ""
	I1014 19:50:08.245023  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.245032  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:08.245038  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:08.245101  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:08.273388  443658 cri.go:89] found id: ""
	I1014 19:50:08.273404  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.273411  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:08.273415  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:08.273470  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:08.301943  443658 cri.go:89] found id: ""
	I1014 19:50:08.301959  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.301966  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:08.301971  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:08.302030  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:08.328569  443658 cri.go:89] found id: ""
	I1014 19:50:08.328587  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.328594  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:08.328599  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:08.328649  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:08.356010  443658 cri.go:89] found id: ""
	I1014 19:50:08.356028  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.356036  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:08.356042  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:08.356095  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:08.383392  443658 cri.go:89] found id: ""
	I1014 19:50:08.383407  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.383414  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:08.383419  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:08.383469  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:08.410636  443658 cri.go:89] found id: ""
	I1014 19:50:08.410653  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.410659  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:08.410667  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:08.410679  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:08.441110  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:08.441129  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:08.506036  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:08.506060  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:08.524075  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:08.524094  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:08.583708  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:08.576429   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.576973   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.578510   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.579066   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.580610   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:08.576429   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.576973   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.578510   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.579066   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.580610   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:08.583720  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:08.583740  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:11.145672  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:11.157553  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:11.157615  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:11.186767  443658 cri.go:89] found id: ""
	I1014 19:50:11.186787  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.186794  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:11.186799  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:11.186858  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:11.216248  443658 cri.go:89] found id: ""
	I1014 19:50:11.216265  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.216273  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:11.216278  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:11.216326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:11.244352  443658 cri.go:89] found id: ""
	I1014 19:50:11.244375  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.244384  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:11.244390  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:11.244457  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:11.271891  443658 cri.go:89] found id: ""
	I1014 19:50:11.271908  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.271915  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:11.271920  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:11.271973  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:11.300619  443658 cri.go:89] found id: ""
	I1014 19:50:11.300635  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.300642  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:11.300647  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:11.300724  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:11.327778  443658 cri.go:89] found id: ""
	I1014 19:50:11.327797  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.327804  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:11.327809  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:11.327856  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:11.356398  443658 cri.go:89] found id: ""
	I1014 19:50:11.356416  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.356425  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:11.356435  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:11.356448  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:11.387147  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:11.387172  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:11.456903  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:11.456928  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:11.475336  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:11.475358  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:11.533524  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:11.526103   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.526626   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528173   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528651   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.530139   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:11.526103   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.526626   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528173   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528651   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.530139   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:11.533537  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:11.533549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:14.099433  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:14.110822  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:14.110894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:14.137081  443658 cri.go:89] found id: ""
	I1014 19:50:14.137099  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.137108  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:14.137115  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:14.137180  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:14.165873  443658 cri.go:89] found id: ""
	I1014 19:50:14.165893  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.165917  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:14.165924  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:14.165991  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:14.194062  443658 cri.go:89] found id: ""
	I1014 19:50:14.194082  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.194091  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:14.194098  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:14.194163  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:14.222120  443658 cri.go:89] found id: ""
	I1014 19:50:14.222139  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.222149  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:14.222156  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:14.222239  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:14.249411  443658 cri.go:89] found id: ""
	I1014 19:50:14.249430  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.249439  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:14.249444  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:14.249517  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:14.276644  443658 cri.go:89] found id: ""
	I1014 19:50:14.276661  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.276668  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:14.276673  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:14.276723  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:14.305269  443658 cri.go:89] found id: ""
	I1014 19:50:14.305287  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.305297  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:14.305308  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:14.305323  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:14.335633  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:14.335650  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:14.407263  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:14.407297  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:14.425952  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:14.425975  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:14.484783  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:14.477581   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.478203   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.479661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.480126   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.481572   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:14.477581   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.478203   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.479661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.480126   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.481572   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:14.484800  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:14.484815  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:17.050537  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:17.062166  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:17.062228  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:17.089863  443658 cri.go:89] found id: ""
	I1014 19:50:17.089883  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.089893  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:17.089900  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:17.089956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:17.118126  443658 cri.go:89] found id: ""
	I1014 19:50:17.118146  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.118153  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:17.118160  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:17.118211  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:17.145473  443658 cri.go:89] found id: ""
	I1014 19:50:17.145493  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.145504  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:17.145511  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:17.145563  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:17.173278  443658 cri.go:89] found id: ""
	I1014 19:50:17.173297  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.173305  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:17.173310  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:17.173364  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:17.200155  443658 cri.go:89] found id: ""
	I1014 19:50:17.200175  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.200183  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:17.200189  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:17.200259  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:17.227022  443658 cri.go:89] found id: ""
	I1014 19:50:17.227039  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.227046  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:17.227051  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:17.227097  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:17.252693  443658 cri.go:89] found id: ""
	I1014 19:50:17.252711  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.252719  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:17.252730  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:17.252771  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:17.284340  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:17.284358  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:17.350087  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:17.350110  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:17.367795  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:17.367815  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:17.426270  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:17.419190   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.419650   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421295   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421842   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.423058   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:17.419190   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.419650   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421295   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421842   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.423058   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:17.426290  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:17.426300  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:19.990063  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:20.001404  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:20.001462  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:20.029335  443658 cri.go:89] found id: ""
	I1014 19:50:20.029356  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.029365  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:20.029371  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:20.029418  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:20.056226  443658 cri.go:89] found id: ""
	I1014 19:50:20.056244  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.056251  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:20.056256  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:20.056303  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:20.085632  443658 cri.go:89] found id: ""
	I1014 19:50:20.085651  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.085666  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:20.085674  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:20.085738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:20.113679  443658 cri.go:89] found id: ""
	I1014 19:50:20.113699  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.113717  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:20.113723  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:20.113793  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:20.141622  443658 cri.go:89] found id: ""
	I1014 19:50:20.141640  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.141647  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:20.141651  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:20.141733  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:20.170013  443658 cri.go:89] found id: ""
	I1014 19:50:20.170032  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.170042  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:20.170049  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:20.170106  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:20.198748  443658 cri.go:89] found id: ""
	I1014 19:50:20.198785  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.198795  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:20.198806  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:20.198818  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:20.216706  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:20.216728  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:20.275300  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:20.267702   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.268302   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.269917   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.270346   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.272061   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:20.267702   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.268302   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.269917   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.270346   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.272061   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:20.275316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:20.275329  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:20.340712  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:20.340738  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:20.371777  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:20.371799  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:22.939903  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:22.951439  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:22.951487  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:22.978695  443658 cri.go:89] found id: ""
	I1014 19:50:22.978715  443658 logs.go:282] 0 containers: []
	W1014 19:50:22.978725  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:22.978732  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:22.978808  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:23.005937  443658 cri.go:89] found id: ""
	I1014 19:50:23.005959  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.005971  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:23.005978  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:23.006032  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:23.032228  443658 cri.go:89] found id: ""
	I1014 19:50:23.032247  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.032257  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:23.032264  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:23.032330  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:23.059407  443658 cri.go:89] found id: ""
	I1014 19:50:23.059424  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.059436  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:23.059450  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:23.059503  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:23.087490  443658 cri.go:89] found id: ""
	I1014 19:50:23.087508  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.087518  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:23.087524  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:23.087588  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:23.116625  443658 cri.go:89] found id: ""
	I1014 19:50:23.116642  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.116649  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:23.116654  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:23.116699  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:23.145362  443658 cri.go:89] found id: ""
	I1014 19:50:23.145379  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.145388  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:23.145399  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:23.145410  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:23.210392  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:23.210420  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:23.242258  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:23.242277  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:23.309159  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:23.309186  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:23.327723  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:23.327744  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:23.386750  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:23.379457   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.380034   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.381688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.382198   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.383449   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:25.887778  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:25.899287  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:25.899359  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:25.928125  443658 cri.go:89] found id: ""
	I1014 19:50:25.928146  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.928156  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:25.928162  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:25.928212  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:25.957045  443658 cri.go:89] found id: ""
	I1014 19:50:25.957061  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.957068  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:25.957073  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:25.957126  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:25.984205  443658 cri.go:89] found id: ""
	I1014 19:50:25.984228  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.984237  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:25.984243  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:25.984289  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:26.012054  443658 cri.go:89] found id: ""
	I1014 19:50:26.012071  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.012078  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:26.012082  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:26.012128  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:26.040304  443658 cri.go:89] found id: ""
	I1014 19:50:26.040321  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.040328  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:26.040332  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:26.040392  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:26.066676  443658 cri.go:89] found id: ""
	I1014 19:50:26.066696  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.066705  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:26.066712  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:26.066787  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:26.094653  443658 cri.go:89] found id: ""
	I1014 19:50:26.094674  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.094684  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:26.094693  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:26.094704  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:26.124447  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:26.124465  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:26.195983  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:26.196006  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:26.214895  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:26.214917  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:26.275196  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:26.267636   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.268258   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.269963   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.270471   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.272090   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:26.275208  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:26.275223  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:28.837202  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:28.848579  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:28.848634  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:28.875162  443658 cri.go:89] found id: ""
	I1014 19:50:28.875182  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.875194  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:28.875200  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:28.875254  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:28.903438  443658 cri.go:89] found id: ""
	I1014 19:50:28.903455  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.903462  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:28.903467  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:28.903520  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:28.931290  443658 cri.go:89] found id: ""
	I1014 19:50:28.931307  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.931314  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:28.931319  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:28.931365  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:28.958813  443658 cri.go:89] found id: ""
	I1014 19:50:28.958831  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.958838  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:28.958843  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:28.958894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:28.984686  443658 cri.go:89] found id: ""
	I1014 19:50:28.984704  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.984711  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:28.984718  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:28.984783  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:29.012142  443658 cri.go:89] found id: ""
	I1014 19:50:29.012161  443658 logs.go:282] 0 containers: []
	W1014 19:50:29.012172  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:29.012183  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:29.012238  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:29.038850  443658 cri.go:89] found id: ""
	I1014 19:50:29.038870  443658 logs.go:282] 0 containers: []
	W1014 19:50:29.038880  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:29.038891  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:29.038902  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:29.069928  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:29.069967  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:29.138190  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:29.138214  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:29.156875  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:29.156904  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:29.216410  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:29.208955   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.209524   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211285   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211710   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.213259   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:29.216425  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:29.216442  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:31.781917  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:31.793447  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:31.793505  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:31.821136  443658 cri.go:89] found id: ""
	I1014 19:50:31.821153  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.821160  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:31.821165  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:31.821214  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:31.849490  443658 cri.go:89] found id: ""
	I1014 19:50:31.849508  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.849515  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:31.849520  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:31.849573  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:31.876743  443658 cri.go:89] found id: ""
	I1014 19:50:31.876777  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.876785  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:31.876790  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:31.876842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:31.905558  443658 cri.go:89] found id: ""
	I1014 19:50:31.905576  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.905584  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:31.905591  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:31.905654  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:31.934155  443658 cri.go:89] found id: ""
	I1014 19:50:31.934174  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.934185  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:31.934191  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:31.934252  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:31.961840  443658 cri.go:89] found id: ""
	I1014 19:50:31.961857  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.961870  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:31.961875  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:31.961924  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:31.989285  443658 cri.go:89] found id: ""
	I1014 19:50:31.989306  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.989317  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:31.989330  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:31.989341  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:32.061358  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:32.061382  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:32.080223  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:32.080243  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:32.142648  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:32.134637   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.135263   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137075   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137669   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.139334   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:32.142684  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:32.142699  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:32.209500  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:32.209528  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:34.742153  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:34.753291  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:34.753345  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:34.781021  443658 cri.go:89] found id: ""
	I1014 19:50:34.781038  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.781045  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:34.781050  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:34.781097  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:34.807324  443658 cri.go:89] found id: ""
	I1014 19:50:34.807341  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.807349  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:34.807354  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:34.807402  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:34.834727  443658 cri.go:89] found id: ""
	I1014 19:50:34.834748  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.834771  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:34.834778  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:34.834833  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:34.861999  443658 cri.go:89] found id: ""
	I1014 19:50:34.862019  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.862031  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:34.862037  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:34.862087  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:34.889667  443658 cri.go:89] found id: ""
	I1014 19:50:34.889684  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.889690  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:34.889694  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:34.889742  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:34.916811  443658 cri.go:89] found id: ""
	I1014 19:50:34.916828  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.916834  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:34.916840  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:34.916899  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:34.944926  443658 cri.go:89] found id: ""
	I1014 19:50:34.944943  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.944951  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:34.944959  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:34.944973  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:35.013004  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:35.013029  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:35.030877  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:35.030903  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:35.089384  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:35.081483   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.082170   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.083809   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.084270   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.085889   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:35.089398  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:35.089409  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:35.149874  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:35.149899  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:37.684070  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:37.695415  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:37.695469  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:37.723582  443658 cri.go:89] found id: ""
	I1014 19:50:37.723598  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.723605  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:37.723611  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:37.723688  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:37.751328  443658 cri.go:89] found id: ""
	I1014 19:50:37.751347  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.751354  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:37.751363  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:37.751410  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:37.779279  443658 cri.go:89] found id: ""
	I1014 19:50:37.779300  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.779311  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:37.779317  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:37.779392  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:37.806937  443658 cri.go:89] found id: ""
	I1014 19:50:37.806954  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.806974  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:37.806979  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:37.807028  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:37.834418  443658 cri.go:89] found id: ""
	I1014 19:50:37.834435  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.834442  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:37.834447  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:37.834495  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:37.861687  443658 cri.go:89] found id: ""
	I1014 19:50:37.861705  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.861712  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:37.861719  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:37.861791  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:37.889605  443658 cri.go:89] found id: ""
	I1014 19:50:37.889622  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.889628  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:37.889637  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:37.889648  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:37.954899  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:37.954928  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:37.988108  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:37.988128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:38.058132  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:38.058158  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:38.076773  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:38.076795  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:38.135957  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:38.127889   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.128350   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.130577   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.131078   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.132629   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:40.636752  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:40.647999  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:40.648055  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:40.674081  443658 cri.go:89] found id: ""
	I1014 19:50:40.674099  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.674107  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:40.674112  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:40.674160  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:40.701160  443658 cri.go:89] found id: ""
	I1014 19:50:40.701177  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.701184  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:40.701189  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:40.701252  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:40.728441  443658 cri.go:89] found id: ""
	I1014 19:50:40.728462  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.728472  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:40.728480  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:40.728527  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:40.756302  443658 cri.go:89] found id: ""
	I1014 19:50:40.756318  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.756325  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:40.756330  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:40.756375  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:40.782665  443658 cri.go:89] found id: ""
	I1014 19:50:40.782682  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.782721  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:40.782727  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:40.782808  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:40.809993  443658 cri.go:89] found id: ""
	I1014 19:50:40.810011  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.810017  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:40.810022  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:40.810081  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:40.837750  443658 cri.go:89] found id: ""
	I1014 19:50:40.837785  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.837795  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:40.837805  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:40.837816  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:40.905565  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:40.905598  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:40.923794  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:40.923817  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:40.982479  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:40.975467   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.976110   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.977609   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.978094   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.979129   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:40.982490  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:40.982503  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:41.043844  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:41.043869  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:43.575810  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:43.587076  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:43.587126  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:43.613973  443658 cri.go:89] found id: ""
	I1014 19:50:43.613992  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.614001  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:43.614007  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:43.614062  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:43.641631  443658 cri.go:89] found id: ""
	I1014 19:50:43.641649  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.641655  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:43.641662  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:43.641740  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:43.668838  443658 cri.go:89] found id: ""
	I1014 19:50:43.668853  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.668860  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:43.668865  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:43.668912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:43.696427  443658 cri.go:89] found id: ""
	I1014 19:50:43.696447  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.696457  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:43.696464  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:43.696515  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:43.723629  443658 cri.go:89] found id: ""
	I1014 19:50:43.723646  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.723652  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:43.723657  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:43.723738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:43.750543  443658 cri.go:89] found id: ""
	I1014 19:50:43.750564  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.750573  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:43.750579  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:43.750630  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:43.777077  443658 cri.go:89] found id: ""
	I1014 19:50:43.777094  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.777100  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:43.777109  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:43.777123  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:43.847663  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:43.847745  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:43.865887  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:43.865906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:43.924883  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:43.917622   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.918218   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.919830   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.920193   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.921570   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:43.924899  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:43.924910  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:43.985909  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:43.985934  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:46.519152  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:46.530574  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:46.530626  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:46.557422  443658 cri.go:89] found id: ""
	I1014 19:50:46.557437  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.557443  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:46.557448  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:46.557494  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:46.584670  443658 cri.go:89] found id: ""
	I1014 19:50:46.584690  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.584699  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:46.584704  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:46.584777  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:46.611880  443658 cri.go:89] found id: ""
	I1014 19:50:46.611898  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.611905  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:46.611912  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:46.611961  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:46.639343  443658 cri.go:89] found id: ""
	I1014 19:50:46.639358  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.639365  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:46.639370  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:46.639420  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:46.667657  443658 cri.go:89] found id: ""
	I1014 19:50:46.667677  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.667686  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:46.667693  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:46.667751  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:46.694195  443658 cri.go:89] found id: ""
	I1014 19:50:46.694218  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.694228  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:46.694234  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:46.694288  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:46.723852  443658 cri.go:89] found id: ""
	I1014 19:50:46.723873  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.723883  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:46.723893  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:46.723911  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:46.795594  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:46.795617  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:46.813986  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:46.814005  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:46.874107  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:46.866264   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.866806   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868435   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868992   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.870716   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:46.874123  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:46.874137  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:46.939214  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:46.939239  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:49.472291  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:49.483645  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:49.483703  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:49.512485  443658 cri.go:89] found id: ""
	I1014 19:50:49.512508  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.512519  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:49.512526  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:49.512579  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:49.541986  443658 cri.go:89] found id: ""
	I1014 19:50:49.542003  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.542010  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:49.542015  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:49.542062  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:49.568820  443658 cri.go:89] found id: ""
	I1014 19:50:49.568837  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.568843  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:49.568848  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:49.568904  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:49.595650  443658 cri.go:89] found id: ""
	I1014 19:50:49.595667  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.595674  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:49.595679  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:49.595738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:49.624580  443658 cri.go:89] found id: ""
	I1014 19:50:49.624597  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.624604  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:49.624610  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:49.624668  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:49.651849  443658 cri.go:89] found id: ""
	I1014 19:50:49.651871  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.651881  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:49.651888  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:49.651942  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:49.679343  443658 cri.go:89] found id: ""
	I1014 19:50:49.679361  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.679369  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:49.679378  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:49.679390  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:49.710667  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:49.710688  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:49.779683  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:49.779708  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:49.797614  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:49.797632  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:49.858709  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:49.850102   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.850643   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853179   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853667   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.855254   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:49.850102   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.850643   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853179   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853667   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.855254   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:49.858721  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:49.858734  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:52.425201  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:52.437033  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:52.437091  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:52.464814  443658 cri.go:89] found id: ""
	I1014 19:50:52.464835  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.464845  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:52.464852  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:52.464920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:52.493108  443658 cri.go:89] found id: ""
	I1014 19:50:52.493128  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.493141  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:52.493147  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:52.493206  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:52.520875  443658 cri.go:89] found id: ""
	I1014 19:50:52.520896  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.520905  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:52.520912  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:52.520971  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:52.548477  443658 cri.go:89] found id: ""
	I1014 19:50:52.548496  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.548503  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:52.548509  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:52.548571  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:52.576240  443658 cri.go:89] found id: ""
	I1014 19:50:52.576260  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.576272  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:52.576278  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:52.576345  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:52.604501  443658 cri.go:89] found id: ""
	I1014 19:50:52.604519  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.604529  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:52.604535  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:52.604605  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:52.636730  443658 cri.go:89] found id: ""
	I1014 19:50:52.636746  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.636777  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:52.636789  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:52.636802  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:52.708243  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:52.708275  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:52.726867  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:52.726890  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:52.785730  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:52.778588   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.779176   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.780807   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.781257   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.782451   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:52.778588   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.779176   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.780807   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.781257   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.782451   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:52.785743  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:52.785783  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:52.849671  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:52.849695  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
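Each block above is one iteration of minikube's control-plane probe: it asks CRI-O for every expected component container, and each query returns an empty ID list ('found id: ""'), so log gathering is retried. Condensed as a shell loop (an assumed equivalent of the repeated Run: lines, not the actual minikube source):

    # probe CRI-O for each control-plane component; empty output = not running
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done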
	I1014 19:50:55.381592  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:55.393025  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:55.393093  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:55.422130  443658 cri.go:89] found id: ""
	I1014 19:50:55.422150  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.422159  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:55.422166  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:55.422225  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:55.449578  443658 cri.go:89] found id: ""
	I1014 19:50:55.449593  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.449599  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:55.449606  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:55.449652  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:55.478330  443658 cri.go:89] found id: ""
	I1014 19:50:55.478349  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.478359  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:55.478366  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:55.478418  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:55.506046  443658 cri.go:89] found id: ""
	I1014 19:50:55.506062  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.506069  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:55.506075  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:55.506121  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:55.533431  443658 cri.go:89] found id: ""
	I1014 19:50:55.533448  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.533460  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:55.533464  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:55.533512  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:55.559554  443658 cri.go:89] found id: ""
	I1014 19:50:55.559571  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.559579  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:55.559583  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:55.559628  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:55.586490  443658 cri.go:89] found id: ""
	I1014 19:50:55.586506  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.586513  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:55.586522  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:55.586533  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:55.654422  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:55.654447  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:55.673174  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:55.673195  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:55.732549  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:55.725166   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.725836   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727380   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727867   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.729272   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:55.725166   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.725836   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727380   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727867   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.729272   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:55.732565  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:55.732578  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:55.798718  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:55.798747  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:58.332284  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:58.343801  443658 kubeadm.go:601] duration metric: took 4m4.243920348s to restartPrimaryControlPlane
	W1014 19:50:58.343901  443658 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 19:50:58.344005  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:50:58.799455  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:50:58.813683  443658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:50:58.822431  443658 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:50:58.822479  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:50:58.830731  443658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:50:58.830743  443658 kubeadm.go:157] found existing configuration files:
	
	I1014 19:50:58.830813  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:50:58.838788  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:50:58.838843  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:50:58.846629  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:50:58.854899  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:50:58.854960  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:50:58.862796  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:50:58.870845  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:50:58.870900  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:50:58.878602  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:50:58.886687  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:50:58.886812  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
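The grep/rm pairs above implement a simple stale-config rule: each kubeconfig under /etc/kubernetes is kept only if it already references the expected endpoint. Since kubeadm reset removed all four files, every grep exits with status 2 and the follow-up rm is a no-op. As a sketch of the same logic:

    # remove any kubeconfig that does not point at the expected endpoint
    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done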
	I1014 19:50:58.894706  443658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:50:58.956049  443658 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:50:59.017911  443658 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
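Both preflight warnings recur on every init attempt in this run and are explicitly covered by the --ignore-preflight-errors list, so neither is what blocks the init below: the SystemVerification one only means the "configs" kernel module is unavailable inside the container, and the Service-Kubelet one can be cleared with the command kubeadm itself suggests (node-side):

    # per the Service-Kubelet warning above
    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service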
	I1014 19:55:01.512196  443658 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1014 19:55:01.512300  443658 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:55:01.515811  443658 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:55:01.515863  443658 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:55:01.515937  443658 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:55:01.515981  443658 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:55:01.516011  443658 kubeadm.go:318] OS: Linux
	I1014 19:55:01.516049  443658 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:55:01.516087  443658 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:55:01.516133  443658 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:55:01.516172  443658 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:55:01.516210  443658 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:55:01.516249  443658 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:55:01.516288  443658 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:55:01.516322  443658 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:55:01.516431  443658 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:55:01.516587  443658 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:55:01.516701  443658 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:55:01.516795  443658 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:55:01.519360  443658 out.go:252]   - Generating certificates and keys ...
	I1014 19:55:01.519469  443658 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:55:01.519557  443658 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:55:01.519666  443658 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:55:01.519744  443658 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:55:01.519850  443658 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:55:01.519914  443658 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:55:01.519978  443658 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:55:01.520034  443658 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:55:01.520097  443658 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:55:01.520167  443658 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:55:01.520203  443658 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:55:01.520251  443658 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:55:01.520299  443658 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:55:01.520348  443658 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:55:01.520393  443658 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:55:01.520450  443658 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:55:01.520499  443658 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:55:01.520576  443658 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:55:01.520641  443658 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:55:01.523229  443658 out.go:252]   - Booting up control plane ...
	I1014 19:55:01.523319  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:55:01.523390  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:55:01.523444  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:55:01.523551  443658 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:55:01.523641  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:55:01.523810  443658 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:55:01.523922  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:55:01.523954  443658 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:55:01.524086  443658 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:55:01.524181  443658 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:55:01.524234  443658 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.568458ms
	I1014 19:55:01.524321  443658 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:55:01.524389  443658 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:55:01.524486  443658 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:55:01.524591  443658 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:55:01.524662  443658 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	I1014 19:55:01.524728  443658 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	I1014 19:55:01.524840  443658 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	I1014 19:55:01.524843  443658 kubeadm.go:318] 
	I1014 19:55:01.524928  443658 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:55:01.525021  443658 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:55:01.525148  443658 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:55:01.525276  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:55:01.525390  443658 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:55:01.525475  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:55:01.525507  443658 kubeadm.go:318] 
	W1014 19:55:01.525679  443658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.568458ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
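The wait-control-plane phase failed because all three components stayed unhealthy for the full 4m0s window. The endpoints kubeadm polled are listed in the output above and can be probed by hand from inside the node (-k skips certificate verification):

    curl -sk https://192.168.49.2:8441/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler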
	
	I1014 19:55:01.525798  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:55:01.982887  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:55:01.996173  443658 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:55:01.996227  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:55:02.004750  443658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:55:02.004776  443658 kubeadm.go:157] found existing configuration files:
	
	I1014 19:55:02.004817  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:55:02.013003  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:55:02.013070  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:55:02.021099  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:55:02.029431  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:55:02.029492  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:55:02.037121  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:55:02.045152  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:55:02.045198  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:55:02.052887  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:55:02.060584  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:55:02.060626  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:55:02.068308  443658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:55:02.126727  443658 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:55:02.188353  443658 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:59:05.052390  443658 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 19:59:05.052568  443658 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:59:05.055525  443658 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:59:05.055579  443658 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:59:05.055669  443658 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:59:05.055719  443658 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:59:05.055746  443658 kubeadm.go:318] OS: Linux
	I1014 19:59:05.055802  443658 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:59:05.055840  443658 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:59:05.055878  443658 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:59:05.055926  443658 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:59:05.055963  443658 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:59:05.056004  443658 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:59:05.056049  443658 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:59:05.056084  443658 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:59:05.056142  443658 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:59:05.056223  443658 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:59:05.056299  443658 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:59:05.056392  443658 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:59:05.059274  443658 out.go:252]   - Generating certificates and keys ...
	I1014 19:59:05.059351  443658 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:59:05.059415  443658 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:59:05.059493  443658 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:59:05.059567  443658 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:59:05.059629  443658 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:59:05.059672  443658 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:59:05.059751  443658 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:59:05.059826  443658 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:59:05.059887  443658 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:59:05.059966  443658 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:59:05.060015  443658 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:59:05.060080  443658 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:59:05.060144  443658 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:59:05.060195  443658 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:59:05.060238  443658 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:59:05.060288  443658 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:59:05.060337  443658 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:59:05.060403  443658 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:59:05.060483  443658 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:59:05.061914  443658 out.go:252]   - Booting up control plane ...
	I1014 19:59:05.062009  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:59:05.062118  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:59:05.062251  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:59:05.062371  443658 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:59:05.062470  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:59:05.062594  443658 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:59:05.062668  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:59:05.062709  443658 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:59:05.062894  443658 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:59:05.063001  443658 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:59:05.063067  443658 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001430917s
	I1014 19:59:05.063161  443658 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:59:05.063245  443658 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:59:05.063317  443658 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:59:05.063385  443658 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:59:05.063443  443658 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	I1014 19:59:05.063502  443658 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	I1014 19:59:05.063588  443658 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	I1014 19:59:05.063599  443658 kubeadm.go:318] 
	I1014 19:59:05.063715  443658 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:59:05.063820  443658 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:59:05.063899  443658 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:59:05.064013  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:59:05.064087  443658 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:59:05.064169  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:59:05.064205  443658 kubeadm.go:318] 
	I1014 19:59:05.064256  443658 kubeadm.go:402] duration metric: took 12m11.001770383s to StartCluster
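After the second identical 4m0s timeout, minikube stops retrying (12m11s total in StartCluster) and falls back to log collection. To investigate by hand, the troubleshooting hint kubeadm printed above applies as-is (CONTAINERID is a placeholder for an ID taken from the ps output):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID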
	I1014 19:59:05.064319  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:59:05.064377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:59:05.094590  443658 cri.go:89] found id: ""
	I1014 19:59:05.094608  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.094615  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:59:05.094620  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:59:05.094695  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:59:05.123951  443658 cri.go:89] found id: ""
	I1014 19:59:05.123969  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.123989  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:59:05.123996  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:59:05.124057  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:59:05.153788  443658 cri.go:89] found id: ""
	I1014 19:59:05.153806  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.153813  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:59:05.153818  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:59:05.153866  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:59:05.182209  443658 cri.go:89] found id: ""
	I1014 19:59:05.182227  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.182233  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:59:05.182239  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:59:05.182295  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:59:05.211682  443658 cri.go:89] found id: ""
	I1014 19:59:05.211743  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.211773  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:59:05.211787  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:59:05.211840  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:59:05.239904  443658 cri.go:89] found id: ""
	I1014 19:59:05.239927  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.239935  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:59:05.239942  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:59:05.239993  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:59:05.266617  443658 cri.go:89] found id: ""
	I1014 19:59:05.266636  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.266643  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:59:05.266710  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:59:05.266747  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:59:05.284891  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:59:05.284919  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:59:05.345910  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:05.338670   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.339278   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.340773   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.341189   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.342723   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:59:05.338670   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.339278   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.340773   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.341189   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.342723   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:59:05.345933  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:59:05.345953  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:59:05.410981  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:59:05.411011  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:59:05.441593  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:59:05.441611  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 19:59:05.511762  443658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 19:59:05.511841  443658 out.go:285] * 
	W1014 19:59:05.511933  443658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:59:05.511948  443658 out.go:285] * 
	W1014 19:59:05.513702  443658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:59:05.517408  443658 out.go:203] 
	W1014 19:59:05.518938  443658 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:59:05.518965  443658 out.go:285] * 
	I1014 19:59:05.520443  443658 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.614518066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.617792539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.618378358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.634925344Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.636235646Z" level=info msg="createCtr: deleting container ID 65f0dc73ec9ea69e31501c976b4433418c103bbf0b3ac355e8829c0387caf4fa from idIndex" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.636277209Z" level=info msg="createCtr: removing container 65f0dc73ec9ea69e31501c976b4433418c103bbf0b3ac355e8829c0387caf4fa" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.636310381Z" level=info msg="createCtr: deleting container 65f0dc73ec9ea69e31501c976b4433418c103bbf0b3ac355e8829c0387caf4fa from storage" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.638326871Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.611499521Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1d103bae-e0cd-43b6-a8b9-21dbf6ee25eb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.612463811Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5ccb7435-2feb-4843-b580-b73b2136ca02 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.613443601Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.613680478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.618067642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.6186241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.638374963Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.640112713Z" level=info msg="createCtr: deleting container ID e6170d040b69887f4e204511d672261a5b0442c88d3d9199109a75deab8a7473 from idIndex" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.640173593Z" level=info msg="createCtr: removing container e6170d040b69887f4e204511d672261a5b0442c88d3d9199109a75deab8a7473" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.640224188Z" level=info msg="createCtr: deleting container e6170d040b69887f4e204511d672261a5b0442c88d3d9199109a75deab8a7473 from storage" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.642655814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_5ce31098ce493b77069c880f0c6ac8e6_0" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.610817996Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=372c4d4e-6b47-4045-8ba6-b6b7e22a7cf5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.611997294Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d4bbc804-55a9-4018-bb4f-cabaff200ebf name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.613018254Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-744288/kube-scheduler" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.613300745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.617547351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.618068516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:06.731833   15792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:06.732412   15792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:06.734248   15792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:06.734887   15792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:06.736448   15792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:06 up  2:41,  0 user,  load average: 0.31, 0.15, 1.10
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:58:58 functional-744288 kubelet[15039]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:58:58 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:58:58 functional-744288 kubelet[15039]: E1014 19:58:58.638817   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.611005   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.643037   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:58:59 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:58:59 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.643143   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:58:59 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:58:59 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.643181   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	Oct 14 19:59:01 functional-744288 kubelet[15039]: E1014 19:59:01.234434   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:01 functional-744288 kubelet[15039]: I1014 19:59:01.389707   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:01 functional-744288 kubelet[15039]: E1014 19:59:01.390137   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:59:04 functional-744288 kubelet[15039]: E1014 19:59:04.623685   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:59:05 functional-744288 kubelet[15039]: E1014 19:59:05.375495   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01ddb1340  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,LastTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:59:05 functional-744288 kubelet[15039]: E1014 19:59:05.963951   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.610362   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.639866   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:06 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:06 functional-744288 kubelet[15039]:  > podSandboxID="6db547a209d52d0398507b1da96eecbcd999edc615f9bed4939047b6f878db45"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.640022   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:06 functional-744288 kubelet[15039]:         container kube-scheduler start failed in pod kube-scheduler-functional-744288_kube-system(e9679524bf37cc2b727411d0e5a93bfe): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:06 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.640064   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-744288" podUID="e9679524bf37cc2b727411d0e5a93bfe"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (310.619115ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (737.02s)
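
The failure above is consistent throughout the captured logs: every CreateContainer call for the control-plane static pods (kube-apiserver, kube-controller-manager, kube-scheduler) fails in CRI-O with "cannot open sd-bus: No such file or directory", so no component ever listens on 8441, 10257, or 10259 and kubeadm's wait-control-plane phase times out after 4m0s. A minimal triage sketch follows, assuming shell access inside the functional-744288 node container; the crictl endpoint is the one kubeadm itself suggests above, while the crio/systemd checks are illustrative additions, not commands from this test run:

	# Confirm no kube-* container ever reaches Running (kubeadm's own hint):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# "cannot open sd-bus" typically means the OCI runtime was asked to use the
	# systemd cgroup manager but cannot reach a systemd bus. Inspect CRI-O's
	# configured cgroup manager and whether a bus socket exists in the guest:
	sudo crio config 2>/dev/null | grep -i cgroup_manager
	ls -l /run/systemd/private 2>/dev/null || echo "no systemd private bus socket"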

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-744288 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-744288 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (52.879724ms)

                                                
                                                
** stderr ** 
	E1014 19:59:07.516383  456997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:07.516826  456997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:07.518074  456997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:07.518296  456997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:07.519717  456997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-744288 get po -l tier=control-plane -n kube-system -o=json": exit status 1
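This failure is downstream of the ExtraConfig failure above: with no apiserver container ever created, every kubectl call against 192.168.49.2:8441 is refused. A hedged way to reproduce the same symptom outside the test harness, using the context and binary names from this run:

	kubectl --context functional-744288 get --raw /livez   # expected here: connection refused
	out/minikube-linux-amd64 status -p functional-744288   # host Running, apiserver Stopped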
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
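The inspect output above shows the failure is not at the Docker layer: the node container is Running, holds 192.168.49.2 on the functional-744288 network, and publishes guest port 8441 to 127.0.0.1:32901. The refused connections therefore originate inside the guest (no apiserver process), not from a missing port mapping. A hedged host-side check, with the port number taken from the Ports section above:

	curl -sk https://127.0.0.1:32901/livez   # refused, matching the in-guest errors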
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (312.441444ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:31 UTC │
	│ unpause │ nospam-442016 --log_dir /tmp/nospam-442016 unpause                                                            │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:31 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ stop    │ nospam-442016 --log_dir /tmp/nospam-442016 stop                                                               │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ delete  │ -p nospam-442016                                                                                              │ nospam-442016     │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │ 14 Oct 25 19:32 UTC │
	│ start   │ -p functional-744288 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:32 UTC │                     │
	│ start   │ -p functional-744288 --alsologtostderr -v=8                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:40 UTC │                     │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.1                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:3.3                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add registry.k8s.io/pause:latest                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache add minikube-local-cache-test:functional-744288                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ functional-744288 cache delete minikube-local-cache-test:functional-744288                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl images                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ cache   │ functional-744288 cache reload                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ ssh     │ functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ kubectl │ functional-744288 kubectl -- --context functional-744288 get pods                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ start   │ -p functional-744288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:46:50
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:46:50.499742  443658 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:46:50.500016  443658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:46:50.500020  443658 out.go:374] Setting ErrFile to fd 2...
	I1014 19:46:50.500023  443658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:46:50.500243  443658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:46:50.500711  443658 out.go:368] Setting JSON to false
	I1014 19:46:50.501776  443658 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8957,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:46:50.501876  443658 start.go:141] virtualization: kvm guest
	I1014 19:46:50.504465  443658 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:46:50.505861  443658 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:46:50.505882  443658 notify.go:220] Checking for updates...
	I1014 19:46:50.508327  443658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:46:50.509750  443658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:46:50.510866  443658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:46:50.511854  443658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:46:50.512854  443658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:46:50.514315  443658 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:46:50.514426  443658 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:46:50.538310  443658 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:46:50.538445  443658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:46:50.601114  443658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-14 19:46:50.588718622 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:46:50.601209  443658 docker.go:318] overlay module found
	I1014 19:46:50.603086  443658 out.go:179] * Using the docker driver based on existing profile
	I1014 19:46:50.604379  443658 start.go:305] selected driver: docker
	I1014 19:46:50.604388  443658 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:50.604469  443658 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:46:50.604549  443658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:46:50.666156  443658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-14 19:46:50.655387801 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:46:50.666705  443658 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:46:50.666723  443658 cni.go:84] Creating CNI manager for ""
	I1014 19:46:50.666779  443658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:46:50.666824  443658 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:50.668890  443658 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:46:50.670269  443658 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:46:50.671700  443658 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:46:50.672853  443658 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:46:50.672887  443658 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:46:50.672894  443658 cache.go:58] Caching tarball of preloaded images
	I1014 19:46:50.672978  443658 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:46:50.672993  443658 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:46:50.673002  443658 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:46:50.673099  443658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:46:50.694236  443658 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:46:50.694247  443658 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:46:50.694262  443658 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:46:50.694285  443658 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:46:50.694339  443658 start.go:364] duration metric: took 40.961µs to acquireMachinesLock for "functional-744288"
	I1014 19:46:50.694355  443658 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:46:50.694359  443658 fix.go:54] fixHost starting: 
	I1014 19:46:50.694551  443658 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:46:50.713829  443658 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:46:50.713852  443658 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:46:50.716011  443658 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:46:50.716063  443658 machine.go:93] provisionDockerMachine start ...
	I1014 19:46:50.716145  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:50.734693  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:50.734948  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:50.734956  443658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:46:50.881904  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:46:50.881928  443658 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:46:50.882024  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:50.900923  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:50.901187  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:50.901202  443658 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:46:51.056989  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:46:51.057085  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.074806  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:51.075019  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:51.075030  443658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:46:51.221854  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
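	[editor's note] The heredoc above follows the Debian/Ubuntu convention of binding the hostname to 127.0.1.1: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. A quick way to confirm the result on the node (an illustrative command, not part of this log):
	
		grep -n '^127.0.1.1' /etc/hosts
	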
	I1014 19:46:51.221878  443658 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:46:51.221910  443658 ubuntu.go:190] setting up certificates
	I1014 19:46:51.221952  443658 provision.go:84] configureAuth start
	I1014 19:46:51.222015  443658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:46:51.240005  443658 provision.go:143] copyHostCerts
	I1014 19:46:51.240069  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:46:51.240090  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:46:51.240177  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:46:51.240322  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:46:51.240330  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:46:51.240371  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:46:51.240443  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:46:51.240447  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:46:51.240478  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:46:51.240545  443658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:46:51.277418  443658 provision.go:177] copyRemoteCerts
	I1014 19:46:51.277469  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:46:51.277512  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.295935  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:51.399940  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:46:51.419014  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:46:51.436411  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:46:51.453971  443658 provision.go:87] duration metric: took 232.002826ms to configureAuth
	I1014 19:46:51.453999  443658 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:46:51.454155  443658 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:46:51.454253  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.471667  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:51.471917  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:51.471928  443658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:46:51.753714  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
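	[editor's note] The write above lands CRIO_MINIKUBE_OPTIONS in /etc/sysconfig/crio.minikube; presumably the kicbase image's crio unit loads that file as an environment file so the --insecure-registry flag reaches the daemon on restart (the wiring itself is an assumption here, not shown in this log). To check on the node (illustrative):
	
		systemctl cat crio | grep -i environment
		cat /etc/sysconfig/crio.minikube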
	
	I1014 19:46:51.753736  443658 machine.go:96] duration metric: took 1.037666418s to provisionDockerMachine
	I1014 19:46:51.753750  443658 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:46:51.753791  443658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:46:51.753870  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:46:51.753924  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.771894  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:51.875275  443658 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:46:51.879014  443658 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:46:51.879036  443658 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:46:51.879053  443658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:46:51.879110  443658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:46:51.879192  443658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:46:51.879264  443658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:46:51.879295  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:46:51.887031  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:46:51.905744  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:46:51.923826  443658 start.go:296] duration metric: took 170.03666ms for postStartSetup
	I1014 19:46:51.923911  443658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:46:51.923959  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.942362  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.043778  443658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:46:52.048837  443658 fix.go:56] duration metric: took 1.354467438s for fixHost
	I1014 19:46:52.048860  443658 start.go:83] releasing machines lock for "functional-744288", held for 1.354513179s
	I1014 19:46:52.048940  443658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:46:52.067069  443658 ssh_runner.go:195] Run: cat /version.json
	I1014 19:46:52.067102  443658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:46:52.067120  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:52.067171  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:52.086721  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.087447  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.242329  443658 ssh_runner.go:195] Run: systemctl --version
	I1014 19:46:52.249118  443658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:46:52.286245  443658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:46:52.291299  443658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:46:52.291349  443658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:46:52.300635  443658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:46:52.300652  443658 start.go:495] detecting cgroup driver to use...
	I1014 19:46:52.300686  443658 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:46:52.300736  443658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:46:52.316275  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:46:52.329801  443658 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:46:52.329853  443658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:46:52.346243  443658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:46:52.359490  443658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:46:52.447197  443658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:46:52.538861  443658 docker.go:234] disabling docker service ...
	I1014 19:46:52.538916  443658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:46:52.553930  443658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:46:52.567369  443658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:46:52.660956  443658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:46:52.750890  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:46:52.763838  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:46:52.778079  443658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:46:52.778155  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.787486  443658 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:46:52.787547  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.796683  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.805576  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.814550  443658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:46:52.822996  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.831895  443658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.840774  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.850651  443658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:46:52.859313  443658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:46:52.867538  443658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:46:52.962127  443658 ssh_runner.go:195] Run: sudo systemctl restart crio
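	[editor's note] Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below; the commented fragment is a reconstruction from the commands, not a capture from the host:
	
		cat /etc/crio/crio.conf.d/02-crio.conf
		# expected (reconstructed) fragment:
		#   pause_image = "registry.k8s.io/pause:3.10.1"
		#   cgroup_manager = "systemd"
		#   conmon_cgroup = "pod"
		#   default_sysctls = [
		#     "net.ipv4.ip_unprivileged_port_start=0",
		#   ]
	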
	I1014 19:46:53.076386  443658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:46:53.076443  443658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:46:53.080594  443658 start.go:563] Will wait 60s for crictl version
	I1014 19:46:53.080668  443658 ssh_runner.go:195] Run: which crictl
	I1014 19:46:53.084304  443658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:46:53.109208  443658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:46:53.109281  443658 ssh_runner.go:195] Run: crio --version
	I1014 19:46:53.138035  443658 ssh_runner.go:195] Run: crio --version
	I1014 19:46:53.168844  443658 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:46:53.170307  443658 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:46:53.187885  443658 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:46:53.194070  443658 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1014 19:46:53.195672  443658 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:46:53.195871  443658 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:46:53.195945  443658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:46:53.228563  443658 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:46:53.228574  443658 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:46:53.228622  443658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:46:53.254361  443658 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:46:53.254375  443658 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:46:53.254381  443658 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:46:53.254470  443658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
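	[editor's note] In the kubelet unit above, the empty ExecStart= line is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may redefine it. The merged unit can be inspected on the node with (illustrative):
	
		systemctl cat kubelet
	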
	I1014 19:46:53.254527  443658 ssh_runner.go:195] Run: crio config
	I1014 19:46:53.300404  443658 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1014 19:46:53.300426  443658 cni.go:84] Creating CNI manager for ""
	I1014 19:46:53.300433  443658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:46:53.300444  443658 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:46:53.300495  443658 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:46:53.300616  443658 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
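	
	[editor's note] The file above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one multi-document kubeadm.yaml. Newer kubeadm releases can sanity-check such a file before it is applied (illustrative; assumes kubeadm >= v1.26):
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml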
	
	I1014 19:46:53.300679  443658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:46:53.309514  443658 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:46:53.309583  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:46:53.317487  443658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:46:53.330167  443658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:46:53.343013  443658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1014 19:46:53.355344  443658 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:46:53.359037  443658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:46:53.444644  443658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:46:53.458036  443658 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:46:53.458048  443658 certs.go:195] generating shared ca certs ...
	I1014 19:46:53.458069  443658 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:46:53.458227  443658 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:46:53.458260  443658 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:46:53.458267  443658 certs.go:257] generating profile certs ...
	I1014 19:46:53.458335  443658 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:46:53.458371  443658 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:46:53.458404  443658 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:46:53.458496  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:46:53.458520  443658 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:46:53.458525  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:46:53.458546  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:46:53.458563  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:46:53.458578  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:46:53.458610  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:46:53.459307  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:46:53.477414  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:46:53.495270  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:46:53.512555  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:46:53.529773  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:46:53.546789  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:46:53.564254  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:46:53.581817  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:46:53.599895  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:46:53.617446  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:46:53.635253  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:46:53.652640  443658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:46:53.665679  443658 ssh_runner.go:195] Run: openssl version
	I1014 19:46:53.672008  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:46:53.680614  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.684470  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.684516  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.719901  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 19:46:53.728850  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:46:53.737556  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.741417  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.741461  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.776307  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:46:53.785236  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:46:53.794084  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.797892  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.797948  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.834593  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
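	[editor's note] The ln -fs commands above implement OpenSSL's subject-hash lookup scheme: TLS stacks scanning /etc/ssl/certs resolve a CA by <hash>.0 rather than by filename. The b5213941 hash seen above can be reproduced by hand (illustrative):
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		ls -l /etc/ssl/certs/b5213941.0
	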
	I1014 19:46:53.844414  443658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:46:53.848749  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:46:53.887194  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:46:53.922606  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:46:53.957478  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:46:53.992284  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:46:54.027831  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
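	[editor's note] -checkend 86400 makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours); presumably the restart path uses this to decide whether certs need regeneration. Standalone shape of the check (illustrative):
	
		openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "expires within 24h"
	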
	I1014 19:46:54.062500  443658 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:54.062581  443658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:46:54.062679  443658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:46:54.091036  443658 cri.go:89] found id: ""
	I1014 19:46:54.091100  443658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:46:54.099853  443658 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:46:54.099866  443658 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:46:54.099936  443658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:46:54.108263  443658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.108959  443658 kubeconfig.go:125] found "functional-744288" server: "https://192.168.49.2:8441"
	I1014 19:46:54.110744  443658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:46:54.119142  443658 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-14 19:32:19.540090301 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-14 19:46:53.353553179 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
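	[editor's note] The drift check rides on diff's exit status: 0 when the rendered kubeadm.yaml matches the one on disk, 1 when they differ, which triggers the reconfigure path that follows. Reduced to its essentials (a sketch):
	
		if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
		  echo "kubeadm config drift detected; reconfiguring"
		fi
	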
	I1014 19:46:54.119152  443658 kubeadm.go:1160] stopping kube-system containers ...
	I1014 19:46:54.119166  443658 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 19:46:54.119218  443658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:46:54.148301  443658 cri.go:89] found id: ""
	I1014 19:46:54.148360  443658 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 19:46:54.184714  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:46:54.193363  443658 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 14 19:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 14 19:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct 14 19:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 14 19:36 /etc/kubernetes/scheduler.conf
	
	I1014 19:46:54.193426  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:46:54.201562  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:46:54.209606  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.209663  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:46:54.217395  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:46:54.225064  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.225124  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:46:54.232906  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:46:54.240872  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.240946  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
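Each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. In this run admin.conf passed the check and was kept, while the other three were removed. A sketch of that loop, with the endpoint from the log:

    endpoint="https://control-plane.minikube.internal:8441"
    for conf in kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits 1 when the endpoint is absent; remove the stale file in that case.
        sudo grep -q "$endpoint" "/etc/kubernetes/$conf" || sudo rm -f "/etc/kubernetes/$conf"
    done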
	I1014 19:46:54.249061  443658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:46:54.257286  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:54.300108  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.343385  443658 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.043246412s)
	I1014 19:46:55.343447  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.525076  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.576109  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
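Rather than a full `kubeadm init`, the reconfiguration replays individual init phases against the new config. The exact sequence from the log, gathered into one reproducible script (the `env PATH=...` wrapper mirrors how the log invokes the pinned kubeadm binary):

    BIN=/var/lib/minikube/binaries/v1.34.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo cp "${CFG}.new" "$CFG"
    # Regenerate certs, kubeconfigs, kubelet config, static-pod manifests, and local etcd.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo /bin/bash -c "env PATH=\"$BIN:\$PATH\" kubeadm init phase $phase --config $CFG"
    done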
	I1014 19:46:55.627520  443658 api_server.go:52] waiting for apiserver process to appear ...
	I1014 19:46:55.627605  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:56.127985  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:56.627838  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:57.127896  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:57.627665  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:58.127984  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:58.627867  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:59.127900  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:59.628123  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:00.128625  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:00.627821  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:01.128624  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:01.628023  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:02.127948  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:02.627921  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:03.127948  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:03.628734  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:04.128392  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:04.628537  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:05.128064  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:05.628802  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:06.128694  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:06.628003  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:07.128400  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:07.628401  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:08.127838  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:08.628730  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:09.128120  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:09.628353  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:10.128434  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:10.628596  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:11.128581  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:11.627793  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:12.127961  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:12.628351  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:13.128116  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:13.627994  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:14.128426  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:14.628582  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:15.127702  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:15.628620  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:16.128507  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:16.628503  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:17.128107  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:17.628228  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:18.128362  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:18.628356  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:19.127920  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:19.628163  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:20.128061  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:20.628781  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:21.127881  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:21.628577  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:22.128659  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:22.628134  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:23.128128  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:23.627880  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:24.128119  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:24.627778  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:25.127863  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:25.628390  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:26.127929  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:26.627912  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:27.128042  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:27.628342  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:28.128494  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:28.628349  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:29.128156  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:29.628040  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:30.127990  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:30.627843  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:31.128015  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:31.627940  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:32.127940  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:32.628112  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:33.127960  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:33.627881  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:34.128093  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:34.628548  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:35.128447  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:35.628084  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:36.128068  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:36.628232  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:37.127674  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:37.627888  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:38.127934  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:38.627918  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:39.127805  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:39.628511  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:40.127885  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:40.628201  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:41.128746  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:41.627723  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:42.127816  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:42.628553  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:43.128336  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:43.628428  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:44.128606  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:44.628579  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:45.128728  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:45.628365  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:46.127990  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:46.628044  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:47.127727  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:47.628173  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:48.128160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:48.627943  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:49.128276  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:49.628454  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:50.127829  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:50.628280  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:51.127982  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:51.628287  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:52.128593  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:52.627776  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:53.127784  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:53.628593  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:54.127690  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:54.627941  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:55.128160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
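After the phases complete, the run polls roughly every 500ms for a kube-apiserver process; in this excerpt the poll never succeeds and, after about a minute (19:46:55 to 19:47:55), falls through to the diagnostics gathering below. An equivalent wait loop as a sketch (the 60s timeout is inferred from the timestamps above, not a documented constant):

    # Poll for an apiserver process matching the minikube invocation.
    deadline=$(( $(date +%s) + 60 ))   # ~60s, as observed between poll start and fallback
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "timed out waiting for kube-apiserver" >&2
            break
        fi
        sleep 0.5
    done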
	I1014 19:47:55.628161  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:47:55.628261  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:47:55.656679  443658 cri.go:89] found id: ""
	I1014 19:47:55.656706  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.656717  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:47:55.656725  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:47:55.656807  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:47:55.684574  443658 cri.go:89] found id: ""
	I1014 19:47:55.684594  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.684602  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:47:55.684607  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:47:55.684669  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:47:55.711291  443658 cri.go:89] found id: ""
	I1014 19:47:55.711309  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.711316  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:47:55.711321  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:47:55.711376  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:47:55.738652  443658 cri.go:89] found id: ""
	I1014 19:47:55.738669  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.738678  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:47:55.738690  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:47:55.738752  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:47:55.765191  443658 cri.go:89] found id: ""
	I1014 19:47:55.765208  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.765215  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:47:55.765220  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:47:55.765267  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:47:55.791406  443658 cri.go:89] found id: ""
	I1014 19:47:55.791425  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.791433  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:47:55.791438  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:47:55.791483  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:47:55.817705  443658 cri.go:89] found id: ""
	I1014 19:47:55.817724  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.817732  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:47:55.817741  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:47:55.817787  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:47:55.885166  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:47:55.885191  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:47:55.903388  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:47:55.903408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:47:55.962011  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:47:55.955051    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.955898    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957465    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957907    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.958999    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:47:55.955051    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.955898    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957465    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957907    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.958999    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:47:55.962024  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:47:55.962036  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:47:56.023614  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:47:56.023639  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
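Each failed poll cycle then gathers the same diagnostics bundle, and that cycle repeats every few seconds for the remainder of this excerpt; only the `describe nodes` step fails, since it needs the very apiserver that is down. The bundle from the log, collected in one place:

    sudo journalctl -u kubelet -n 400                                         # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig                            # node state (fails: apiserver down)
    sudo journalctl -u crio -n 400                                            # CRI-O logs
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a          # container status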
	I1014 19:47:58.556015  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:58.567258  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:47:58.567330  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:47:58.593588  443658 cri.go:89] found id: ""
	I1014 19:47:58.593606  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.593613  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:47:58.593618  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:47:58.593686  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:47:58.621667  443658 cri.go:89] found id: ""
	I1014 19:47:58.621687  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.621694  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:47:58.621699  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:47:58.621753  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:47:58.648823  443658 cri.go:89] found id: ""
	I1014 19:47:58.648841  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.648851  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:47:58.648858  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:47:58.648920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:47:58.675986  443658 cri.go:89] found id: ""
	I1014 19:47:58.676007  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.676017  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:47:58.676024  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:47:58.676074  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:47:58.703476  443658 cri.go:89] found id: ""
	I1014 19:47:58.703492  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.703499  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:47:58.703504  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:47:58.703553  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:47:58.732093  443658 cri.go:89] found id: ""
	I1014 19:47:58.732116  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.732127  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:47:58.732133  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:47:58.732188  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:47:58.759813  443658 cri.go:89] found id: ""
	I1014 19:47:58.759832  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.759839  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:47:58.759848  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:47:58.759858  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:47:58.829913  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:47:58.829936  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:47:58.848245  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:47:58.848269  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:47:58.907295  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:47:58.900510    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.901027    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.902546    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.903012    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.904214    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:47:58.900510    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.901027    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.902546    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.903012    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.904214    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:47:58.907316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:47:58.907329  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:47:58.971553  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:47:58.971576  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:01.502989  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:01.514422  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:01.514481  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:01.541083  443658 cri.go:89] found id: ""
	I1014 19:48:01.541099  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.541107  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:01.541113  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:01.541166  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:01.568411  443658 cri.go:89] found id: ""
	I1014 19:48:01.568430  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.568438  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:01.568443  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:01.568507  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:01.596626  443658 cri.go:89] found id: ""
	I1014 19:48:01.596643  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.596651  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:01.596656  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:01.596709  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:01.625098  443658 cri.go:89] found id: ""
	I1014 19:48:01.625114  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.625121  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:01.625126  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:01.625175  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:01.652267  443658 cri.go:89] found id: ""
	I1014 19:48:01.652287  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.652296  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:01.652302  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:01.652369  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:01.680110  443658 cri.go:89] found id: ""
	I1014 19:48:01.680126  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.680132  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:01.680137  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:01.680183  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:01.706650  443658 cri.go:89] found id: ""
	I1014 19:48:01.706673  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.706682  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:01.706692  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:01.706703  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:01.777579  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:01.777603  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:01.796141  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:01.796160  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:01.854657  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:01.848022    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.848515    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850053    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850582    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.851657    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:01.848022    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.848515    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850053    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850582    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.851657    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:01.854673  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:01.854688  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:01.921567  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:01.921605  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:04.454355  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:04.465748  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:04.465834  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:04.493735  443658 cri.go:89] found id: ""
	I1014 19:48:04.493752  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.493773  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:04.493780  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:04.493837  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:04.520295  443658 cri.go:89] found id: ""
	I1014 19:48:04.520313  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.520321  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:04.520325  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:04.520380  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:04.547856  443658 cri.go:89] found id: ""
	I1014 19:48:04.547880  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.547891  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:04.547898  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:04.547963  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:04.574029  443658 cri.go:89] found id: ""
	I1014 19:48:04.574047  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.574055  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:04.574059  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:04.574111  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:04.600612  443658 cri.go:89] found id: ""
	I1014 19:48:04.600635  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.600643  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:04.600648  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:04.600710  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:04.627768  443658 cri.go:89] found id: ""
	I1014 19:48:04.627787  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.627796  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:04.627803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:04.627868  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:04.654609  443658 cri.go:89] found id: ""
	I1014 19:48:04.654626  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.654633  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:04.654641  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:04.654666  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:04.723997  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:04.724022  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:04.742117  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:04.742138  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:04.800762  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:04.793052    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.793685    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795214    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795736    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.797328    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:04.793052    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.793685    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795214    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795736    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.797328    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:04.800782  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:04.800797  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:04.865079  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:04.865104  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:07.397466  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:07.409124  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:07.409189  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:07.436009  443658 cri.go:89] found id: ""
	I1014 19:48:07.436030  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.436039  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:07.436045  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:07.436092  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:07.463450  443658 cri.go:89] found id: ""
	I1014 19:48:07.463467  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.463474  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:07.463479  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:07.463538  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:07.489350  443658 cri.go:89] found id: ""
	I1014 19:48:07.489367  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.489373  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:07.489379  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:07.489423  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:07.516187  443658 cri.go:89] found id: ""
	I1014 19:48:07.516205  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.516212  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:07.516217  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:07.516266  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:07.544147  443658 cri.go:89] found id: ""
	I1014 19:48:07.544163  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.544171  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:07.544178  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:07.544232  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:07.570956  443658 cri.go:89] found id: ""
	I1014 19:48:07.570987  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.570997  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:07.571004  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:07.571055  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:07.599057  443658 cri.go:89] found id: ""
	I1014 19:48:07.599075  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.599083  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:07.599091  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:07.599102  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:07.629352  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:07.629386  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:07.696795  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:07.696819  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:07.714841  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:07.714863  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:07.773003  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:07.765637    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.766223    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.767815    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.768258    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.769624    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:07.765637    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.766223    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.767815    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.768258    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.769624    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:07.773022  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:07.773036  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:10.338910  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:10.350323  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:10.350379  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:10.377858  443658 cri.go:89] found id: ""
	I1014 19:48:10.377875  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.377882  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:10.377886  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:10.377938  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:10.404249  443658 cri.go:89] found id: ""
	I1014 19:48:10.404265  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.404272  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:10.404277  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:10.404326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:10.432298  443658 cri.go:89] found id: ""
	I1014 19:48:10.432315  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.432322  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:10.432328  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:10.432377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:10.458476  443658 cri.go:89] found id: ""
	I1014 19:48:10.458495  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.458501  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:10.458507  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:10.458552  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:10.486998  443658 cri.go:89] found id: ""
	I1014 19:48:10.487017  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.487024  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:10.487029  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:10.487075  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:10.514207  443658 cri.go:89] found id: ""
	I1014 19:48:10.514223  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.514230  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:10.514235  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:10.514285  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:10.541589  443658 cri.go:89] found id: ""
	I1014 19:48:10.541604  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.541610  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:10.541618  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:10.541630  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:10.608114  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:10.608140  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:10.627515  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:10.627537  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:10.687776  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:10.680118    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.680631    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682237    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682859    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.684410    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:10.680118    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.680631    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682237    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682859    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.684410    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:10.687790  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:10.687805  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:10.752090  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:10.752115  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:13.282895  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:13.294310  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:13.294364  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:13.321971  443658 cri.go:89] found id: ""
	I1014 19:48:13.321990  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.321999  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:13.322005  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:13.322054  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:13.349696  443658 cri.go:89] found id: ""
	I1014 19:48:13.349717  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.349727  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:13.349734  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:13.349809  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:13.375640  443658 cri.go:89] found id: ""
	I1014 19:48:13.375658  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.375664  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:13.375669  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:13.375723  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:13.401774  443658 cri.go:89] found id: ""
	I1014 19:48:13.401795  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.401805  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:13.401810  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:13.401857  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:13.428959  443658 cri.go:89] found id: ""
	I1014 19:48:13.428976  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.428983  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:13.428988  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:13.429047  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:13.457247  443658 cri.go:89] found id: ""
	I1014 19:48:13.457264  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.457271  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:13.457276  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:13.457324  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:13.483816  443658 cri.go:89] found id: ""
	I1014 19:48:13.483834  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.483841  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:13.483849  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:13.483860  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:13.551788  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:13.551811  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:13.569457  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:13.569478  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:13.627267  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:13.619783    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.620394    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.621969    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.622387    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.623926    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:13.627279  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:13.627289  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:13.691177  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:13.691201  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:16.221827  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:16.233209  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:16.233277  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:16.259929  443658 cri.go:89] found id: ""
	I1014 19:48:16.259948  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.259959  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:16.259966  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:16.260018  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:16.287292  443658 cri.go:89] found id: ""
	I1014 19:48:16.287310  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.287318  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:16.287326  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:16.287381  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:16.314495  443658 cri.go:89] found id: ""
	I1014 19:48:16.314516  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.314525  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:16.314531  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:16.314602  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:16.340741  443658 cri.go:89] found id: ""
	I1014 19:48:16.340772  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.340785  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:16.340791  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:16.340839  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:16.368210  443658 cri.go:89] found id: ""
	I1014 19:48:16.368225  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.368233  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:16.368239  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:16.368289  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:16.394831  443658 cri.go:89] found id: ""
	I1014 19:48:16.394848  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.394858  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:16.394865  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:16.394922  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:16.421594  443658 cri.go:89] found id: ""
	I1014 19:48:16.421614  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.421622  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:16.421631  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:16.421641  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:16.491514  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:16.491538  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:16.509528  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:16.509549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:16.567026  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:16.559396    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.560067    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.561808    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.562264    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.563791    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:16.567039  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:16.567050  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:16.633705  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:16.633729  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:19.170176  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:19.181543  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:19.181597  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:19.207369  443658 cri.go:89] found id: ""
	I1014 19:48:19.207386  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.207392  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:19.207397  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:19.207441  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:19.233860  443658 cri.go:89] found id: ""
	I1014 19:48:19.233881  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.233890  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:19.233896  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:19.233956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:19.260261  443658 cri.go:89] found id: ""
	I1014 19:48:19.260279  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.260287  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:19.260293  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:19.260346  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:19.287494  443658 cri.go:89] found id: ""
	I1014 19:48:19.287515  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.287525  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:19.287532  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:19.287584  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:19.313774  443658 cri.go:89] found id: ""
	I1014 19:48:19.313792  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.313798  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:19.313803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:19.313860  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:19.340266  443658 cri.go:89] found id: ""
	I1014 19:48:19.340286  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.340296  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:19.340305  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:19.340371  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:19.367478  443658 cri.go:89] found id: ""
	I1014 19:48:19.367494  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.367501  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:19.367510  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:19.367519  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:19.434384  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:19.434408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:19.453201  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:19.453221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:19.511748  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:19.504301    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.504947    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506543    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506980    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.508451    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:19.511771  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:19.511786  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:19.572669  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:19.572694  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:22.104359  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:22.116056  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:22.116114  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:22.143506  443658 cri.go:89] found id: ""
	I1014 19:48:22.143526  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.143535  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:22.143542  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:22.143604  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:22.171275  443658 cri.go:89] found id: ""
	I1014 19:48:22.171293  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.171300  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:22.171304  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:22.171354  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:22.200946  443658 cri.go:89] found id: ""
	I1014 19:48:22.200963  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.200969  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:22.200975  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:22.201021  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:22.229821  443658 cri.go:89] found id: ""
	I1014 19:48:22.229838  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.229848  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:22.229853  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:22.229908  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:22.257470  443658 cri.go:89] found id: ""
	I1014 19:48:22.257490  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.257501  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:22.257507  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:22.257561  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:22.286561  443658 cri.go:89] found id: ""
	I1014 19:48:22.286582  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.286590  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:22.286640  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:22.286708  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:22.314642  443658 cri.go:89] found id: ""
	I1014 19:48:22.314659  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.314665  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:22.314673  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:22.314703  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:22.375334  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:22.367894    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.368440    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370076    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370561    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.372196    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:22.375355  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:22.375369  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:22.437367  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:22.437393  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:22.467945  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:22.467963  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:22.538691  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:22.538715  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:25.057422  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:25.069417  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:25.069480  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:25.097308  443658 cri.go:89] found id: ""
	I1014 19:48:25.097327  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.097334  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:25.097340  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:25.097399  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:25.124869  443658 cri.go:89] found id: ""
	I1014 19:48:25.124888  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.124897  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:25.124902  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:25.124956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:25.151745  443658 cri.go:89] found id: ""
	I1014 19:48:25.151777  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.151788  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:25.151794  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:25.151851  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:25.178827  443658 cri.go:89] found id: ""
	I1014 19:48:25.178847  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.178857  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:25.178864  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:25.178919  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:25.207030  443658 cri.go:89] found id: ""
	I1014 19:48:25.207048  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.207055  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:25.207060  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:25.207115  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:25.234277  443658 cri.go:89] found id: ""
	I1014 19:48:25.234295  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.234302  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:25.234307  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:25.234351  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:25.260062  443658 cri.go:89] found id: ""
	I1014 19:48:25.260079  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.260085  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:25.260094  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:25.260105  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:25.328418  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:25.328443  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:25.346610  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:25.346630  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:25.405353  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:25.397912    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.398394    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400014    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400430    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.401975    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:25.405366  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:25.405378  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:25.466377  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:25.466403  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:27.999561  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:28.010893  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:28.010948  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:28.037673  443658 cri.go:89] found id: ""
	I1014 19:48:28.037692  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.037699  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:28.037720  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:28.037786  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:28.065810  443658 cri.go:89] found id: ""
	I1014 19:48:28.065828  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.065835  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:28.065840  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:28.065891  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:28.093517  443658 cri.go:89] found id: ""
	I1014 19:48:28.093535  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.093542  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:28.093547  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:28.093594  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:28.120885  443658 cri.go:89] found id: ""
	I1014 19:48:28.120907  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.120917  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:28.120924  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:28.120991  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:28.151601  443658 cri.go:89] found id: ""
	I1014 19:48:28.151621  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.151632  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:28.151677  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:28.151731  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:28.179686  443658 cri.go:89] found id: ""
	I1014 19:48:28.179707  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.179718  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:28.179725  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:28.179796  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:28.207048  443658 cri.go:89] found id: ""
	I1014 19:48:28.207065  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.207073  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:28.207081  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:28.207092  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:28.273826  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:28.273858  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:28.291974  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:28.291996  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:28.350599  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:28.343032    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344089    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344502    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346102    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346541    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:28.350610  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:28.350620  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:28.412963  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:28.412999  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:30.943653  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:30.954861  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:30.954918  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:30.982663  443658 cri.go:89] found id: ""
	I1014 19:48:30.982687  443658 logs.go:282] 0 containers: []
	W1014 19:48:30.982697  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:30.982705  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:30.982790  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:31.010956  443658 cri.go:89] found id: ""
	I1014 19:48:31.010972  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.010982  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:31.010988  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:31.011044  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:31.037820  443658 cri.go:89] found id: ""
	I1014 19:48:31.037835  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.037845  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:31.037851  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:31.037908  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:31.064198  443658 cri.go:89] found id: ""
	I1014 19:48:31.064219  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.064229  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:31.064237  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:31.064290  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:31.090978  443658 cri.go:89] found id: ""
	I1014 19:48:31.091014  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.091025  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:31.091031  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:31.091085  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:31.119501  443658 cri.go:89] found id: ""
	I1014 19:48:31.119519  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.119526  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:31.119531  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:31.119578  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:31.147180  443658 cri.go:89] found id: ""
	I1014 19:48:31.147202  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.147212  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:31.147223  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:31.147235  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:31.215950  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:31.215975  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:31.234800  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:31.234824  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:31.293858  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:31.286222    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.286789    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288416    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288945    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.290474    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:31.293875  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:31.293886  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:31.357651  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:31.357679  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:33.890973  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:33.903698  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:33.903750  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:33.930766  443658 cri.go:89] found id: ""
	I1014 19:48:33.930786  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.930793  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:33.930798  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:33.930850  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:33.958613  443658 cri.go:89] found id: ""
	I1014 19:48:33.958634  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.958644  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:33.958652  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:33.958714  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:33.985879  443658 cri.go:89] found id: ""
	I1014 19:48:33.985900  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.985908  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:33.985913  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:33.985969  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:34.014311  443658 cri.go:89] found id: ""
	I1014 19:48:34.014330  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.014338  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:34.014344  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:34.014406  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:34.042331  443658 cri.go:89] found id: ""
	I1014 19:48:34.042352  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.042361  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:34.042369  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:34.042432  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:34.070428  443658 cri.go:89] found id: ""
	I1014 19:48:34.070446  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.070456  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:34.070463  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:34.070517  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:34.097884  443658 cri.go:89] found id: ""
	I1014 19:48:34.097903  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.097921  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:34.097931  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:34.097948  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:34.157332  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1014 19:48:34.149617    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.150366    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152026    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152566    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.153919    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:34.157346  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:34.157361  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:34.220371  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:34.220398  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:34.250307  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:34.250325  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:34.315972  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:34.315994  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:36.835436  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:36.846681  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:36.846733  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:36.873365  443658 cri.go:89] found id: ""
	I1014 19:48:36.873381  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.873389  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:36.873394  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:36.873447  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:36.900441  443658 cri.go:89] found id: ""
	I1014 19:48:36.900458  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.900464  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:36.900469  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:36.900528  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:36.928334  443658 cri.go:89] found id: ""
	I1014 19:48:36.928352  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.928359  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:36.928364  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:36.928432  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:36.955215  443658 cri.go:89] found id: ""
	I1014 19:48:36.955234  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.955244  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:36.955249  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:36.955304  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:36.982183  443658 cri.go:89] found id: ""
	I1014 19:48:36.982201  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.982208  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:36.982213  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:36.982270  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:37.009766  443658 cri.go:89] found id: ""
	I1014 19:48:37.009788  443658 logs.go:282] 0 containers: []
	W1014 19:48:37.009798  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:37.009803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:37.009852  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:37.036432  443658 cri.go:89] found id: ""
	I1014 19:48:37.036454  443658 logs.go:282] 0 containers: []
	W1014 19:48:37.036464  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:37.036474  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:37.036484  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:37.101021  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:37.101045  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:37.132706  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:37.132724  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:37.200337  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:37.200365  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:37.218525  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:37.218545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:37.279294  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:37.271380    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.272016    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.273706    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.274226    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.275831    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:37.271380    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.272016    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.273706    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.274226    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.275831    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
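
The cycle above now repeats, with only timestamps and PIDs changing, until the apiserver wait gives up. As a reading aid, here is a minimal shell sketch of the probe loop being driven over SSH: the individual commands are copied verbatim from the ssh_runner lines, while the loop framing and the roughly 2.5 s interval are inferred from the log timestamps and are assumptions, not minikube source.

    # Sketch of one probe cycle (commands verbatim from the log; loop and sleep inferred).
    while true; do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*'
      for name in kube-apiserver etcd coredns kube-scheduler \
                  kube-proxy kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$name"    # empty: no control-plane containers exist
      done
      sudo journalctl -u kubelet -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # fails: :8441 refuses connections
      sudo journalctl -u crio -n 400
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
      sleep 2.5                                     # approximate gap between cycles
    done
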
	I1014 19:48:39.779639  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:39.791242  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:39.791305  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:39.817960  443658 cri.go:89] found id: ""
	I1014 19:48:39.817977  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.817984  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:39.817989  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:39.818038  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:39.845643  443658 cri.go:89] found id: ""
	I1014 19:48:39.845661  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.845668  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:39.845673  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:39.845724  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:39.872711  443658 cri.go:89] found id: ""
	I1014 19:48:39.872727  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.872734  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:39.872738  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:39.872815  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:39.900683  443658 cri.go:89] found id: ""
	I1014 19:48:39.900705  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.900714  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:39.900719  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:39.900807  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:39.929509  443658 cri.go:89] found id: ""
	I1014 19:48:39.929529  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.929540  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:39.929546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:39.929599  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:39.955582  443658 cri.go:89] found id: ""
	I1014 19:48:39.955598  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.955605  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:39.955610  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:39.955657  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:39.983710  443658 cri.go:89] found id: ""
	I1014 19:48:39.983727  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.983736  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:39.983744  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:39.983782  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:40.052784  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:40.052811  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:40.070963  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:40.070983  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:40.129639  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:40.122787    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.123371    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.124932    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.125359    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.126495    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:40.122787    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.123371    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.124932    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.125359    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.126495    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:40.129685  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:40.129697  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:40.191333  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:40.191359  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:42.723817  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:42.735282  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:42.735333  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:42.762376  443658 cri.go:89] found id: ""
	I1014 19:48:42.762395  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.762402  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:42.762407  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:42.762455  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:42.789118  443658 cri.go:89] found id: ""
	I1014 19:48:42.789136  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.789142  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:42.789147  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:42.789194  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:42.816692  443658 cri.go:89] found id: ""
	I1014 19:48:42.816709  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.816717  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:42.816721  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:42.816787  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:42.844094  443658 cri.go:89] found id: ""
	I1014 19:48:42.844111  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.844117  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:42.844122  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:42.844169  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:42.871946  443658 cri.go:89] found id: ""
	I1014 19:48:42.871964  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.871971  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:42.871975  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:42.872038  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:42.899614  443658 cri.go:89] found id: ""
	I1014 19:48:42.899632  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.899638  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:42.899643  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:42.899689  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:42.927253  443658 cri.go:89] found id: ""
	I1014 19:48:42.927269  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.927277  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:42.927285  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:42.927301  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:42.994077  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:42.994105  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:43.012747  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:43.012777  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:43.071125  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:43.063880    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.064444    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066049    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066536    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.068056    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:43.063880    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.064444    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066049    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066536    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.068056    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:43.071145  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:43.071157  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:43.136102  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:43.136125  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
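
The "container status" gather wraps a fallback chain in a single bash -c string; expanded for readability (same behavior, just unrolled):

    # Prefer crictl, resolving its path with `which` (or trying a bare "crictl"
    # if nothing is found); fall back to docker if the crictl invocation fails.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a
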
	I1014 19:48:45.668732  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:45.679980  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:45.680041  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:45.708000  443658 cri.go:89] found id: ""
	I1014 19:48:45.708030  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.708040  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:45.708046  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:45.708093  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:45.736452  443658 cri.go:89] found id: ""
	I1014 19:48:45.736530  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.736542  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:45.736548  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:45.736603  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:45.764163  443658 cri.go:89] found id: ""
	I1014 19:48:45.764184  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.764194  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:45.764201  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:45.764259  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:45.791827  443658 cri.go:89] found id: ""
	I1014 19:48:45.791842  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.791848  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:45.791854  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:45.791912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:45.819509  443658 cri.go:89] found id: ""
	I1014 19:48:45.819529  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.819540  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:45.819547  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:45.819609  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:45.847227  443658 cri.go:89] found id: ""
	I1014 19:48:45.847248  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.847259  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:45.847266  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:45.847329  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:45.873974  443658 cri.go:89] found id: ""
	I1014 19:48:45.873995  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.874004  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:45.874015  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:45.874030  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:45.932513  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:45.925000    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.925641    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927410    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927848    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.929196    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:45.925000    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.925641    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927410    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927848    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.929196    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:45.932528  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:45.932545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:45.993477  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:45.993504  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:46.025620  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:46.025638  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:46.097209  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:46.097236  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:48.617067  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:48.628616  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:48.628683  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:48.655361  443658 cri.go:89] found id: ""
	I1014 19:48:48.655377  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.655388  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:48.655395  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:48.655458  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:48.681992  443658 cri.go:89] found id: ""
	I1014 19:48:48.682008  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.682015  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:48.682020  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:48.682065  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:48.708630  443658 cri.go:89] found id: ""
	I1014 19:48:48.708647  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.708654  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:48.708658  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:48.708726  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:48.735832  443658 cri.go:89] found id: ""
	I1014 19:48:48.735848  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.735859  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:48.735863  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:48.735921  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:48.763984  443658 cri.go:89] found id: ""
	I1014 19:48:48.763999  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.764017  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:48.764022  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:48.764074  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:48.790052  443658 cri.go:89] found id: ""
	I1014 19:48:48.790072  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.790081  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:48.790088  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:48.790137  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:48.816830  443658 cri.go:89] found id: ""
	I1014 19:48:48.816847  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.816854  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:48.816863  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:48.816874  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:48.885983  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:48.886007  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:48.904564  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:48.904584  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:48.963221  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:48.955419    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.956384    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.957942    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.958423    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.960005    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:48.955419    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.956384    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.957942    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.958423    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.960005    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:48.963232  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:48.963245  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:49.024076  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:49.024100  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:51.555915  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:51.567493  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:51.567566  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:51.593927  443658 cri.go:89] found id: ""
	I1014 19:48:51.593943  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.593950  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:51.593955  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:51.594000  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:51.622234  443658 cri.go:89] found id: ""
	I1014 19:48:51.622250  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.622257  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:51.622261  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:51.622306  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:51.648637  443658 cri.go:89] found id: ""
	I1014 19:48:51.648654  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.648660  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:51.648666  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:51.648730  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:51.675538  443658 cri.go:89] found id: ""
	I1014 19:48:51.675559  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.675570  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:51.675577  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:51.675631  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:51.701640  443658 cri.go:89] found id: ""
	I1014 19:48:51.701657  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.701664  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:51.701670  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:51.701730  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:51.729739  443658 cri.go:89] found id: ""
	I1014 19:48:51.729770  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.729782  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:51.729789  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:51.729839  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:51.757162  443658 cri.go:89] found id: ""
	I1014 19:48:51.757184  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.757195  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:51.757206  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:51.757225  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:51.825383  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:51.825408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:51.843441  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:51.843462  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:51.901599  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:51.893806    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.894477    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896214    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896786    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.898462    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:51.893806    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.894477    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896214    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896786    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.898462    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:51.901609  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:51.901621  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:51.963670  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:51.963696  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:54.494451  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:54.505690  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:54.505748  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:54.532934  443658 cri.go:89] found id: ""
	I1014 19:48:54.532956  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.532966  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:54.532973  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:54.533035  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:54.560665  443658 cri.go:89] found id: ""
	I1014 19:48:54.560682  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.560689  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:54.560693  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:54.560746  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:54.587851  443658 cri.go:89] found id: ""
	I1014 19:48:54.587871  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.587882  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:54.587889  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:54.587939  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:54.615307  443658 cri.go:89] found id: ""
	I1014 19:48:54.615324  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.615331  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:54.615336  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:54.615381  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:54.642900  443658 cri.go:89] found id: ""
	I1014 19:48:54.642916  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.642922  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:54.642928  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:54.642987  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:54.670686  443658 cri.go:89] found id: ""
	I1014 19:48:54.670702  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.670710  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:54.670715  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:54.670784  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:54.697226  443658 cri.go:89] found id: ""
	I1014 19:48:54.697246  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.697255  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:54.697266  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:54.697280  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:54.759777  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:54.759804  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:54.790599  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:54.790617  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:54.864057  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:54.864090  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:54.882103  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:54.882128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:54.942079  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:54.934581    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.935124    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.936659    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.937300    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.938843    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:54.934581    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.935124    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.936659    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.937300    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.938843    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:57.443958  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:57.455537  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:57.455596  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:57.482660  443658 cri.go:89] found id: ""
	I1014 19:48:57.482684  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.482694  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:57.482704  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:57.482783  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:57.510445  443658 cri.go:89] found id: ""
	I1014 19:48:57.510461  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.510467  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:57.510471  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:57.510523  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:57.537439  443658 cri.go:89] found id: ""
	I1014 19:48:57.537456  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.537464  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:57.537469  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:57.537515  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:57.564369  443658 cri.go:89] found id: ""
	I1014 19:48:57.564386  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.564394  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:57.564401  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:57.564455  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:57.591584  443658 cri.go:89] found id: ""
	I1014 19:48:57.591601  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.591607  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:57.591612  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:57.591657  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:57.620996  443658 cri.go:89] found id: ""
	I1014 19:48:57.621016  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.621026  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:57.621033  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:57.621096  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:57.650978  443658 cri.go:89] found id: ""
	I1014 19:48:57.650994  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.651001  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:57.651010  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:57.651022  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:57.709879  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:57.701644    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.702204    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.704523    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.705023    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.706491    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:57.701644    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.702204    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.704523    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.705023    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.706491    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:57.709895  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:57.709906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:57.773086  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:57.773110  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:57.804357  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:57.804375  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:57.876116  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:57.876141  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:00.397550  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:00.408833  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:00.408898  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:00.436551  443658 cri.go:89] found id: ""
	I1014 19:49:00.436572  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.436580  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:00.436586  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:00.436643  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:00.463380  443658 cri.go:89] found id: ""
	I1014 19:49:00.463398  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.463406  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:00.463411  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:00.463464  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:00.489936  443658 cri.go:89] found id: ""
	I1014 19:49:00.489953  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.489961  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:00.489967  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:00.490025  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:00.517733  443658 cri.go:89] found id: ""
	I1014 19:49:00.517777  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.517789  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:00.517799  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:00.517853  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:00.545738  443658 cri.go:89] found id: ""
	I1014 19:49:00.545770  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.545782  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:00.545789  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:00.545847  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:00.572980  443658 cri.go:89] found id: ""
	I1014 19:49:00.572998  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.573007  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:00.573013  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:00.573073  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:00.601579  443658 cri.go:89] found id: ""
	I1014 19:49:00.601596  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.601608  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:00.601620  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:00.601634  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:00.664237  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:00.664264  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:00.696881  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:00.696906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:00.769175  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:00.769201  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:00.787483  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:00.787504  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:00.845998  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:00.838686    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.839226    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.840825    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.841284    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.842865    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:00.838686    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.839226    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.840825    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.841284    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.842865    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
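
Every "describe nodes" attempt in this stretch fails the same way: the kubectl staged on the node dials the apiserver at localhost:8441 and gets connection refused, meaning nothing is listening on that port yet. A quick way to confirm that from the test host would be the sketch below; the profile name is a placeholder, not taken from this log:

    minikube -p <profile> ssh -- sudo ss -ltn 'sport = :8441'
    # An empty listener table is consistent with the refusals above.
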
	I1014 19:49:03.347716  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:03.359494  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:03.359550  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:03.387814  443658 cri.go:89] found id: ""
	I1014 19:49:03.387833  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.387842  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:03.387848  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:03.387913  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:03.416379  443658 cri.go:89] found id: ""
	I1014 19:49:03.416400  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.416410  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:03.416415  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:03.416466  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:03.444338  443658 cri.go:89] found id: ""
	I1014 19:49:03.444355  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.444364  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:03.444368  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:03.444429  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:03.472283  443658 cri.go:89] found id: ""
	I1014 19:49:03.472299  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.472306  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:03.472311  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:03.472368  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:03.499924  443658 cri.go:89] found id: ""
	I1014 19:49:03.499940  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.499947  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:03.499951  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:03.500014  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:03.528675  443658 cri.go:89] found id: ""
	I1014 19:49:03.528691  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.528698  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:03.528703  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:03.528780  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:03.555961  443658 cri.go:89] found id: ""
	I1014 19:49:03.555979  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.555986  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:03.555995  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:03.556009  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:03.615676  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:03.608021    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.608674    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610310    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610821    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.612076    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:03.608021    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.608674    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610310    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610821    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.612076    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:03.615687  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:03.615699  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:03.680122  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:03.680151  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:03.712091  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:03.712109  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:03.779370  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:03.779396  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
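
The timestamps show this whole block repeating roughly every three seconds: a pgrep for a live kube-apiserver process, a crictl sweep over each expected control-plane container, then a fresh round of log gathering. Re-expressed as shell purely for illustration (minikube does this in Go; only the commands and the interval come from the log):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sudo crictl ps -a --quiet --name=kube-apiserver   # still empty each round
        sleep 3
    done
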
	I1014 19:49:06.297908  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:06.309773  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:06.309831  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:06.337910  443658 cri.go:89] found id: ""
	I1014 19:49:06.337930  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.337939  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:06.337946  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:06.337996  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:06.366075  443658 cri.go:89] found id: ""
	I1014 19:49:06.366090  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.366097  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:06.366102  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:06.366149  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:06.393203  443658 cri.go:89] found id: ""
	I1014 19:49:06.393219  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.393225  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:06.393230  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:06.393274  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:06.421220  443658 cri.go:89] found id: ""
	I1014 19:49:06.421240  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.421250  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:06.421257  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:06.421322  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:06.449354  443658 cri.go:89] found id: ""
	I1014 19:49:06.449373  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.449382  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:06.449388  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:06.449450  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:06.476432  443658 cri.go:89] found id: ""
	I1014 19:49:06.476450  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.476459  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:06.476467  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:06.476536  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:06.504006  443658 cri.go:89] found id: ""
	I1014 19:49:06.504031  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.504038  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:06.504047  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:06.504057  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:06.533877  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:06.533894  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:06.600597  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:06.600622  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:06.619193  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:06.619216  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:06.680047  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:06.672165    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.672728    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.674412    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.675003    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.676679    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:06.672165    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.672728    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.674412    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.675003    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.676679    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:06.680057  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:06.680069  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
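
On the flags used by the sweep: pgrep -xnf matches against the full command line (-f), requires an exact match (-x), and reports only the newest PID (-n); crictl ps -a --quiet --name=... prints just the IDs of all containers, including exited ones, whose name matches. Both come back empty here, so not even a crashed control-plane container exists. A compact way to run the same sweep by hand, assuming shell access to the node:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        printf '%s: ' "$c"; sudo crictl ps -a --quiet --name="$c" | wc -l
    done
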
	I1014 19:49:09.242233  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:09.253413  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:09.253465  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:09.280670  443658 cri.go:89] found id: ""
	I1014 19:49:09.280688  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.280698  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:09.280705  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:09.280776  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:09.307015  443658 cri.go:89] found id: ""
	I1014 19:49:09.307033  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.307043  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:09.307049  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:09.307104  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:09.334276  443658 cri.go:89] found id: ""
	I1014 19:49:09.334296  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.334304  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:09.334309  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:09.334357  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:09.360472  443658 cri.go:89] found id: ""
	I1014 19:49:09.360487  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.360494  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:09.360499  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:09.360549  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:09.388322  443658 cri.go:89] found id: ""
	I1014 19:49:09.388338  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.388345  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:09.388349  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:09.388396  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:09.414924  443658 cri.go:89] found id: ""
	I1014 19:49:09.414944  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.414955  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:09.414962  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:09.415023  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:09.441772  443658 cri.go:89] found id: ""
	I1014 19:49:09.441792  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.441800  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:09.441809  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:09.441822  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:09.509426  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:09.509452  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:09.527807  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:09.527829  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:09.587241  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:09.579349    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.579944    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582253    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582735    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.583971    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:09.579349    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.579944    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582253    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582735    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.583971    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:09.587253  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:09.587265  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:09.654561  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:09.654584  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
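
The "container status" gather uses a double fallback: the backticked `which crictl || echo crictl` substitutes the literal name when the binary is missing, so any failure message still says "crictl", and if the whole crictl invocation fails, docker ps -a is tried instead. The same command spelled out with $() for readability, behavior unchanged:

    sudo $(which crictl || echo crictl) ps -a \
        || sudo docker ps -a   # Docker fallback for non-CRI setups
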
	I1014 19:49:12.186794  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:12.198312  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:12.198367  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:12.225457  443658 cri.go:89] found id: ""
	I1014 19:49:12.225476  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.225491  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:12.225497  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:12.225548  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:12.253224  443658 cri.go:89] found id: ""
	I1014 19:49:12.253243  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.253251  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:12.253256  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:12.253317  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:12.280591  443658 cri.go:89] found id: ""
	I1014 19:49:12.280610  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.280617  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:12.280622  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:12.280674  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:12.309016  443658 cri.go:89] found id: ""
	I1014 19:49:12.309033  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.309039  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:12.309044  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:12.309091  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:12.337230  443658 cri.go:89] found id: ""
	I1014 19:49:12.337251  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.337260  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:12.337267  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:12.337336  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:12.364682  443658 cri.go:89] found id: ""
	I1014 19:49:12.364728  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.364737  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:12.364743  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:12.364821  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:12.392936  443658 cri.go:89] found id: ""
	I1014 19:49:12.392960  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.392967  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:12.392976  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:12.392986  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:12.452595  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:12.444355    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.444853    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.446438    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.447015    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.449368    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:12.444355    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.444853    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.446438    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.447015    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.449368    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:12.452608  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:12.452621  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:12.516437  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:12.516463  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:12.547372  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:12.547391  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:12.614937  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:12.614961  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
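
The dmesg gather keeps only kernel messages at priority warn and above: -P suppresses the pager, -H gives human-readable timestamps, -L=never strips color codes from the captured output, --level filters by priority, and tail -n 400 caps the volume. The same filter with long options, assuming the util-linux dmesg:

    sudo dmesg --nopager --human --color=never \
        --level warn,err,crit,alert,emerg | tail -n 400
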
	I1014 19:49:15.134260  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:15.146546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:15.146600  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:15.174510  443658 cri.go:89] found id: ""
	I1014 19:49:15.174526  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.174533  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:15.174538  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:15.174585  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:15.202132  443658 cri.go:89] found id: ""
	I1014 19:49:15.202152  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.202162  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:15.202169  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:15.202226  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:15.230616  443658 cri.go:89] found id: ""
	I1014 19:49:15.230633  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.230639  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:15.230644  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:15.230696  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:15.258236  443658 cri.go:89] found id: ""
	I1014 19:49:15.258253  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.258263  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:15.258267  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:15.258326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:15.286042  443658 cri.go:89] found id: ""
	I1014 19:49:15.286059  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.286066  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:15.286072  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:15.286134  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:15.314815  443658 cri.go:89] found id: ""
	I1014 19:49:15.314833  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.314840  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:15.314844  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:15.314897  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:15.341953  443658 cri.go:89] found id: ""
	I1014 19:49:15.341969  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.341976  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:15.341984  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:15.341995  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:15.412363  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:15.412387  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:15.430737  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:15.430770  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:15.492263  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:15.483535   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.484124   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.485892   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.486398   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.489083   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:15.483535   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.484124   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.485892   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.486398   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.489083   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:15.492274  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:15.492286  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:15.556874  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:15.556899  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
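
The CRI-O and kubelet gathers are plain journal reads scoped to one systemd unit and capped at the last 400 lines. An equivalent long-option form, with --no-pager added here for non-interactive capture (that flag is an addition, not in the log):

    sudo journalctl --unit=kubelet --lines=400 --no-pager
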
	I1014 19:49:18.089267  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:18.101164  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:18.101225  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:18.130411  443658 cri.go:89] found id: ""
	I1014 19:49:18.130428  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.130435  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:18.130440  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:18.130500  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:18.157908  443658 cri.go:89] found id: ""
	I1014 19:49:18.157927  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.157938  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:18.157943  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:18.157997  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:18.185537  443658 cri.go:89] found id: ""
	I1014 19:49:18.185560  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.185568  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:18.185573  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:18.185627  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:18.212466  443658 cri.go:89] found id: ""
	I1014 19:49:18.212485  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.212493  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:18.212498  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:18.212561  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:18.239975  443658 cri.go:89] found id: ""
	I1014 19:49:18.239993  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.240000  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:18.240005  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:18.240056  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:18.267082  443658 cri.go:89] found id: ""
	I1014 19:49:18.267101  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.267109  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:18.267114  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:18.267163  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:18.293654  443658 cri.go:89] found id: ""
	I1014 19:49:18.293672  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.293679  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:18.293689  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:18.293700  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:18.363853  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:18.363878  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:18.383522  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:18.383545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:18.442304  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:18.435285   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.435849   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437451   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437904   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.438994   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:18.435285   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.435849   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437451   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437904   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.438994   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:18.442316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:18.442327  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:18.503728  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:18.503752  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
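
Note that "describe nodes" runs the kubectl binary minikube staged on the node, evidently version-matched to the cluster (v1.34.1), against the node-local kubeconfig, so the refusal is measured from inside the node rather than from the test host. Reproducing it by hand would look like the sketch below; the profile name is a placeholder:

    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
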
	I1014 19:49:21.035160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:21.046500  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:21.046556  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:21.073686  443658 cri.go:89] found id: ""
	I1014 19:49:21.073705  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.073716  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:21.073723  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:21.073790  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:21.100037  443658 cri.go:89] found id: ""
	I1014 19:49:21.100052  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.100059  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:21.100064  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:21.100107  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:21.127167  443658 cri.go:89] found id: ""
	I1014 19:49:21.127183  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.127190  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:21.127195  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:21.127243  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:21.155028  443658 cri.go:89] found id: ""
	I1014 19:49:21.155045  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.155052  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:21.155056  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:21.155104  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:21.182898  443658 cri.go:89] found id: ""
	I1014 19:49:21.182919  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.182926  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:21.182931  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:21.182981  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:21.214304  443658 cri.go:89] found id: ""
	I1014 19:49:21.214321  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.214327  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:21.214332  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:21.214377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:21.242021  443658 cri.go:89] found id: ""
	I1014 19:49:21.242038  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.242045  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:21.242053  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:21.242065  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:21.259561  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:21.259582  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:21.319723  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:21.312041   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.312668   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314370   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314958   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.316607   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:21.312041   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.312668   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314370   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314958   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.316607   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:21.319734  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:21.319745  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:21.380339  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:21.380373  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:21.410561  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:21.410580  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:23.982170  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:23.993512  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:23.993566  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:24.021666  443658 cri.go:89] found id: ""
	I1014 19:49:24.021681  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.021688  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:24.021693  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:24.021777  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:24.048763  443658 cri.go:89] found id: ""
	I1014 19:49:24.048788  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.048799  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:24.048806  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:24.048868  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:24.076823  443658 cri.go:89] found id: ""
	I1014 19:49:24.076845  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.076856  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:24.076862  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:24.076920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:24.104097  443658 cri.go:89] found id: ""
	I1014 19:49:24.104117  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.104126  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:24.104130  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:24.104182  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:24.130667  443658 cri.go:89] found id: ""
	I1014 19:49:24.130682  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.130691  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:24.130696  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:24.130747  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:24.158412  443658 cri.go:89] found id: ""
	I1014 19:49:24.158429  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.158437  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:24.158442  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:24.158491  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:24.185765  443658 cri.go:89] found id: ""
	I1014 19:49:24.185785  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.185793  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:24.185801  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:24.185813  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:24.244433  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:24.236694   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.237287   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.238941   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.239414   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.240968   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:24.236694   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.237287   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.238941   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.239414   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.240968   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:24.244454  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:24.244469  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:24.307235  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:24.307260  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:24.337358  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:24.337379  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:24.406396  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:24.406421  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
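
By this point every sweep has found zero containers, running or exited, for any control-plane name, which suggests the static pods were never created at all; of the logs gathered each round, the kubelet journal is the most likely to say why. One hedged starting point for reading it:

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
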
	I1014 19:49:26.925678  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:26.936862  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:26.936911  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:26.963233  443658 cri.go:89] found id: ""
	I1014 19:49:26.963249  443658 logs.go:282] 0 containers: []
	W1014 19:49:26.963256  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:26.963261  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:26.963318  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:26.989526  443658 cri.go:89] found id: ""
	I1014 19:49:26.989545  443658 logs.go:282] 0 containers: []
	W1014 19:49:26.989553  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:26.989558  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:26.989606  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:27.016445  443658 cri.go:89] found id: ""
	I1014 19:49:27.016461  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.016468  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:27.016473  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:27.016536  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:27.044936  443658 cri.go:89] found id: ""
	I1014 19:49:27.044954  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.044961  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:27.044965  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:27.045023  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:27.071859  443658 cri.go:89] found id: ""
	I1014 19:49:27.071881  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.071891  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:27.071898  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:27.071964  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:27.101404  443658 cri.go:89] found id: ""
	I1014 19:49:27.101421  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.101431  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:27.101439  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:27.101492  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:27.130140  443658 cri.go:89] found id: ""
	I1014 19:49:27.130158  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.130168  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:27.130178  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:27.130192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:27.191223  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:27.183739   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.184372   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.185983   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.186439   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.188034   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:27.183739   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.184372   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.185983   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.186439   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.188034   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:27.191237  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:27.191249  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:27.255430  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:27.255456  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:27.285702  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:27.285740  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:27.352209  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:27.352234  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	[... the same diagnostic cycle repeats roughly every three seconds from 19:49:29 through 19:49:50 (pids 10727, 10842, 10974, 11084, 11199, 11323, 11456, 11581): no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers are found, and each "kubectl describe nodes" attempt fails with "connection refused" on localhost:8441 ...]
	I1014 19:49:53.451878  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:53.463151  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:53.463203  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:53.489474  443658 cri.go:89] found id: ""
	I1014 19:49:53.489490  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.489499  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:53.489506  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:53.489568  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:53.516620  443658 cri.go:89] found id: ""
	I1014 19:49:53.516638  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.516649  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:53.516656  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:53.516712  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:53.543251  443658 cri.go:89] found id: ""
	I1014 19:49:53.543270  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.543281  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:53.543287  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:53.543354  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:53.570736  443658 cri.go:89] found id: ""
	I1014 19:49:53.570769  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.570779  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:53.570786  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:53.570840  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:53.598355  443658 cri.go:89] found id: ""
	I1014 19:49:53.598372  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.598381  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:53.598387  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:53.598450  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:53.625505  443658 cri.go:89] found id: ""
	I1014 19:49:53.625524  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.625535  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:53.625542  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:53.625592  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:53.654789  443658 cri.go:89] found id: ""
	I1014 19:49:53.654808  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.654815  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:53.654823  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:53.654839  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:53.726281  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:53.726306  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:53.744456  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:53.744480  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:53.804344  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:53.796970   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.797615   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799272   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799836   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.800930   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:53.796970   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.797615   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799272   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799836   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.800930   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:53.804365  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:53.804378  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:53.864148  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:53.864174  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:56.397395  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:56.408940  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:56.408994  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:56.436261  443658 cri.go:89] found id: ""
	I1014 19:49:56.436277  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.436284  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:56.436291  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:56.436343  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:56.464497  443658 cri.go:89] found id: ""
	I1014 19:49:56.464514  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.464523  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:56.464529  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:56.464584  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:56.492551  443658 cri.go:89] found id: ""
	I1014 19:49:56.492573  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.492580  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:56.492585  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:56.492634  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:56.519631  443658 cri.go:89] found id: ""
	I1014 19:49:56.519650  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.519661  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:56.519667  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:56.519716  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:56.545245  443658 cri.go:89] found id: ""
	I1014 19:49:56.545262  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.545269  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:56.545274  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:56.545322  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:56.572677  443658 cri.go:89] found id: ""
	I1014 19:49:56.572700  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.572711  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:56.572718  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:56.572795  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:56.601136  443658 cri.go:89] found id: ""
	I1014 19:49:56.601156  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.601167  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:56.601178  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:56.601192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:56.666034  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:56.666060  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:56.698200  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:56.698222  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:56.767958  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:56.767983  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:56.786835  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:56.786860  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:56.845436  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:56.837911   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.838400   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840026   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840573   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.842214   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:56.837911   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.838400   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840026   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840573   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.842214   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:59.347179  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:59.358660  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:59.358711  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:59.387000  443658 cri.go:89] found id: ""
	I1014 19:49:59.387027  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.387034  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:59.387040  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:59.387088  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:59.414823  443658 cri.go:89] found id: ""
	I1014 19:49:59.414840  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.414847  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:59.414852  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:59.414912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:59.442607  443658 cri.go:89] found id: ""
	I1014 19:49:59.442624  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.442631  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:59.442636  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:59.442696  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:59.471821  443658 cri.go:89] found id: ""
	I1014 19:49:59.471846  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.471856  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:59.471864  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:59.471937  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:59.498236  443658 cri.go:89] found id: ""
	I1014 19:49:59.498256  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.498263  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:59.498268  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:59.498316  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:59.525020  443658 cri.go:89] found id: ""
	I1014 19:49:59.525039  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.525046  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:59.525051  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:59.525101  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:59.551137  443658 cri.go:89] found id: ""
	I1014 19:49:59.551157  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.551167  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:59.551180  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:59.551192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:59.622834  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:59.622862  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:59.641369  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:59.641392  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:59.701545  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:59.694218   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.694838   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696377   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696859   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.698400   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:49:59.694218   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.694838   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696377   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696859   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.698400   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:49:59.701565  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:59.701623  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:59.765745  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:59.765773  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:02.298114  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:02.309805  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:02.309861  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:02.337973  443658 cri.go:89] found id: ""
	I1014 19:50:02.337989  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.337996  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:02.338001  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:02.338069  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:02.366907  443658 cri.go:89] found id: ""
	I1014 19:50:02.366925  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.366933  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:02.366938  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:02.366996  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:02.394409  443658 cri.go:89] found id: ""
	I1014 19:50:02.394427  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.394437  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:02.394445  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:02.394507  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:02.423803  443658 cri.go:89] found id: ""
	I1014 19:50:02.423825  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.423835  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:02.423841  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:02.423894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:02.453316  443658 cri.go:89] found id: ""
	I1014 19:50:02.453346  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.453357  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:02.453363  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:02.453429  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:02.480872  443658 cri.go:89] found id: ""
	I1014 19:50:02.480901  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.480911  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:02.480917  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:02.480981  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:02.508491  443658 cri.go:89] found id: ""
	I1014 19:50:02.508513  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.508520  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:02.508530  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:02.508545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:02.538904  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:02.538926  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:02.604250  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:02.604276  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:02.624221  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:02.624244  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:02.686637  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:02.678751   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.679376   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681040   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681562   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.683182   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:02.678751   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.679376   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681040   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681562   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.683182   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:02.686653  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:02.686670  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:05.248160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:05.259486  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:05.259543  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:05.287245  443658 cri.go:89] found id: ""
	I1014 19:50:05.287266  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.287277  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:05.287283  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:05.287337  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:05.316262  443658 cri.go:89] found id: ""
	I1014 19:50:05.316281  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.316292  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:05.316298  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:05.316357  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:05.345733  443658 cri.go:89] found id: ""
	I1014 19:50:05.345767  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.345779  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:05.345786  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:05.345842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:05.373802  443658 cri.go:89] found id: ""
	I1014 19:50:05.373821  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.373832  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:05.373840  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:05.373907  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:05.401831  443658 cri.go:89] found id: ""
	I1014 19:50:05.401849  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.401856  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:05.401861  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:05.401915  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:05.430126  443658 cri.go:89] found id: ""
	I1014 19:50:05.430148  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.430160  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:05.430167  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:05.430238  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:05.459121  443658 cri.go:89] found id: ""
	I1014 19:50:05.459139  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.459146  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:05.459154  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:05.459166  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:05.519744  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:05.512669   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.513219   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.514764   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.515265   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.516363   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:05.512669   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.513219   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.514764   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.515265   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.516363   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:05.519777  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:05.519791  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:05.584599  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:05.584627  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:05.617086  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:05.617104  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:05.684896  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:05.684924  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:08.207248  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:08.218426  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:08.218487  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:08.245002  443658 cri.go:89] found id: ""
	I1014 19:50:08.245023  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.245032  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:08.245038  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:08.245101  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:08.273388  443658 cri.go:89] found id: ""
	I1014 19:50:08.273404  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.273411  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:08.273415  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:08.273470  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:08.301943  443658 cri.go:89] found id: ""
	I1014 19:50:08.301959  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.301966  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:08.301971  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:08.302030  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:08.328569  443658 cri.go:89] found id: ""
	I1014 19:50:08.328587  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.328594  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:08.328599  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:08.328649  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:08.356010  443658 cri.go:89] found id: ""
	I1014 19:50:08.356028  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.356036  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:08.356042  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:08.356095  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:08.383392  443658 cri.go:89] found id: ""
	I1014 19:50:08.383407  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.383414  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:08.383419  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:08.383469  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:08.410636  443658 cri.go:89] found id: ""
	I1014 19:50:08.410653  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.410659  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:08.410667  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:08.410679  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:08.441110  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:08.441129  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:08.506036  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:08.506060  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:08.524075  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:08.524094  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:08.583708  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:08.576429   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.576973   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.578510   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.579066   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.580610   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:08.576429   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.576973   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.578510   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.579066   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.580610   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:08.583720  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:08.583740  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:11.145672  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:11.157553  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:11.157615  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:11.186767  443658 cri.go:89] found id: ""
	I1014 19:50:11.186787  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.186794  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:11.186799  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:11.186858  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:11.216248  443658 cri.go:89] found id: ""
	I1014 19:50:11.216265  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.216273  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:11.216278  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:11.216326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:11.244352  443658 cri.go:89] found id: ""
	I1014 19:50:11.244375  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.244384  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:11.244390  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:11.244457  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:11.271891  443658 cri.go:89] found id: ""
	I1014 19:50:11.271908  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.271915  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:11.271920  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:11.271973  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:11.300619  443658 cri.go:89] found id: ""
	I1014 19:50:11.300635  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.300642  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:11.300647  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:11.300724  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:11.327778  443658 cri.go:89] found id: ""
	I1014 19:50:11.327797  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.327804  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:11.327809  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:11.327856  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:11.356398  443658 cri.go:89] found id: ""
	I1014 19:50:11.356416  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.356425  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:11.356435  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:11.356448  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:11.387147  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:11.387172  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:11.456903  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:11.456928  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:11.475336  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:11.475358  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:11.533524  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:11.526103   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.526626   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528173   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528651   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.530139   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:11.526103   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.526626   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528173   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528651   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.530139   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:11.533537  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:11.533549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:14.099433  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:14.110822  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:14.110894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:14.137081  443658 cri.go:89] found id: ""
	I1014 19:50:14.137099  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.137108  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:14.137115  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:14.137180  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:14.165873  443658 cri.go:89] found id: ""
	I1014 19:50:14.165893  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.165917  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:14.165924  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:14.165991  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:14.194062  443658 cri.go:89] found id: ""
	I1014 19:50:14.194082  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.194091  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:14.194098  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:14.194163  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:14.222120  443658 cri.go:89] found id: ""
	I1014 19:50:14.222139  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.222149  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:14.222156  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:14.222239  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:14.249411  443658 cri.go:89] found id: ""
	I1014 19:50:14.249430  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.249439  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:14.249444  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:14.249517  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:14.276644  443658 cri.go:89] found id: ""
	I1014 19:50:14.276661  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.276668  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:14.276673  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:14.276723  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:14.305269  443658 cri.go:89] found id: ""
	I1014 19:50:14.305287  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.305297  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:14.305308  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:14.305323  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:14.335633  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:14.335650  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:14.407263  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:14.407297  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:14.425952  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:14.425975  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:14.484783  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:14.477581   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.478203   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.479661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.480126   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.481572   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:14.477581   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.478203   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.479661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.480126   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.481572   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:14.484800  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:14.484815  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
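
The attempt above is one pass of a loop that repeats for the rest of this log: minikube first looks for a kube-apiserver process, then asks the CRI runtime for each expected control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, and CRI-O diagnostics before trying again a few seconds later. A minimal bash sketch of that loop, assembled only from the commands visible in these entries (the component list, the 400-line journal window, and the delay are read off the log above, not taken from minikube source; the describe-nodes call is omitted because it needs the node-local kubectl path):

	#!/usr/bin/env bash
	# Reproduce the wait loop seen in the log: process check first, then a
	# per-component CRI lookup, then the same diagnostics between attempts.
	components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
	            kube-controller-manager kindnet)
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for c in "${components[@]}"; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    [ -z "$ids" ] && echo "No container was found matching \"$c\"" >&2
	  done
	  sudo journalctl -u kubelet -n 400 >/tmp/kubelet.log
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 >/tmp/dmesg.log
	  sudo journalctl -u crio -n 400 >/tmp/crio.log
	  sleep 2.5   # the timestamps above show roughly 2.5-3s between attempts
	done
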
	I1014 19:50:17.050537  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:17.062166  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:17.062228  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:17.089863  443658 cri.go:89] found id: ""
	I1014 19:50:17.089883  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.089893  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:17.089900  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:17.089956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:17.118126  443658 cri.go:89] found id: ""
	I1014 19:50:17.118146  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.118153  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:17.118160  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:17.118211  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:17.145473  443658 cri.go:89] found id: ""
	I1014 19:50:17.145493  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.145504  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:17.145511  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:17.145563  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:17.173278  443658 cri.go:89] found id: ""
	I1014 19:50:17.173297  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.173305  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:17.173310  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:17.173364  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:17.200155  443658 cri.go:89] found id: ""
	I1014 19:50:17.200175  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.200183  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:17.200189  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:17.200259  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:17.227022  443658 cri.go:89] found id: ""
	I1014 19:50:17.227039  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.227046  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:17.227051  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:17.227097  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:17.252693  443658 cri.go:89] found id: ""
	I1014 19:50:17.252711  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.252719  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:17.252730  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:17.252771  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:17.284340  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:17.284358  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:17.350087  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:17.350110  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:17.367795  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:17.367815  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:17.426270  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:17.419190   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.419650   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421295   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421842   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.423058   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:17.419190   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.419650   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421295   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421842   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.423058   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:17.426290  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:17.426300  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:19.990063  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:20.001404  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:20.001462  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:20.029335  443658 cri.go:89] found id: ""
	I1014 19:50:20.029356  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.029365  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:20.029371  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:20.029418  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:20.056226  443658 cri.go:89] found id: ""
	I1014 19:50:20.056244  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.056251  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:20.056256  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:20.056303  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:20.085632  443658 cri.go:89] found id: ""
	I1014 19:50:20.085651  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.085666  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:20.085674  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:20.085738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:20.113679  443658 cri.go:89] found id: ""
	I1014 19:50:20.113699  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.113717  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:20.113723  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:20.113793  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:20.141622  443658 cri.go:89] found id: ""
	I1014 19:50:20.141640  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.141647  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:20.141651  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:20.141733  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:20.170013  443658 cri.go:89] found id: ""
	I1014 19:50:20.170032  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.170042  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:20.170049  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:20.170106  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:20.198748  443658 cri.go:89] found id: ""
	I1014 19:50:20.198785  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.198795  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:20.198806  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:20.198818  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:20.216706  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:20.216728  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:20.275300  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:20.267702   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.268302   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.269917   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.270346   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.272061   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:20.267702   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.268302   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.269917   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.270346   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.272061   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:20.275316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:20.275329  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:20.340712  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:20.340738  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:20.371777  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:20.371799  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:22.939903  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:22.951439  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:22.951487  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:22.978695  443658 cri.go:89] found id: ""
	I1014 19:50:22.978715  443658 logs.go:282] 0 containers: []
	W1014 19:50:22.978725  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:22.978732  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:22.978808  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:23.005937  443658 cri.go:89] found id: ""
	I1014 19:50:23.005959  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.005971  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:23.005978  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:23.006032  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:23.032228  443658 cri.go:89] found id: ""
	I1014 19:50:23.032247  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.032257  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:23.032264  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:23.032330  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:23.059407  443658 cri.go:89] found id: ""
	I1014 19:50:23.059424  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.059436  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:23.059450  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:23.059503  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:23.087490  443658 cri.go:89] found id: ""
	I1014 19:50:23.087508  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.087518  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:23.087524  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:23.087588  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:23.116625  443658 cri.go:89] found id: ""
	I1014 19:50:23.116642  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.116649  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:23.116654  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:23.116699  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:23.145362  443658 cri.go:89] found id: ""
	I1014 19:50:23.145379  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.145388  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:23.145399  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:23.145410  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:23.210392  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:23.210420  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:23.242258  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:23.242277  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:23.309159  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:23.309186  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:23.327723  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:23.327744  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:23.386750  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:23.379457   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.380034   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.381688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.382198   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.383449   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:23.379457   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.380034   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.381688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.382198   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.383449   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:25.887778  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:25.899287  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:25.899359  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:25.928125  443658 cri.go:89] found id: ""
	I1014 19:50:25.928146  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.928156  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:25.928162  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:25.928212  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:25.957045  443658 cri.go:89] found id: ""
	I1014 19:50:25.957061  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.957068  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:25.957073  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:25.957126  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:25.984205  443658 cri.go:89] found id: ""
	I1014 19:50:25.984228  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.984237  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:25.984243  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:25.984289  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:26.012054  443658 cri.go:89] found id: ""
	I1014 19:50:26.012071  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.012078  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:26.012082  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:26.012128  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:26.040304  443658 cri.go:89] found id: ""
	I1014 19:50:26.040321  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.040328  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:26.040332  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:26.040392  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:26.066676  443658 cri.go:89] found id: ""
	I1014 19:50:26.066696  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.066705  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:26.066712  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:26.066787  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:26.094653  443658 cri.go:89] found id: ""
	I1014 19:50:26.094674  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.094684  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:26.094693  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:26.094704  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:26.124447  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:26.124465  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:26.195983  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:26.196006  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:26.214895  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:26.214917  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:26.275196  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:26.267636   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.268258   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.269963   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.270471   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.272090   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:26.267636   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.268258   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.269963   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.270471   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.272090   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:26.275208  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:26.275223  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:28.837202  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:28.848579  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:28.848634  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:28.875162  443658 cri.go:89] found id: ""
	I1014 19:50:28.875182  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.875194  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:28.875200  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:28.875254  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:28.903438  443658 cri.go:89] found id: ""
	I1014 19:50:28.903455  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.903462  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:28.903467  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:28.903520  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:28.931290  443658 cri.go:89] found id: ""
	I1014 19:50:28.931307  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.931314  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:28.931319  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:28.931365  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:28.958813  443658 cri.go:89] found id: ""
	I1014 19:50:28.958831  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.958838  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:28.958843  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:28.958894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:28.984686  443658 cri.go:89] found id: ""
	I1014 19:50:28.984704  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.984711  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:28.984718  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:28.984783  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:29.012142  443658 cri.go:89] found id: ""
	I1014 19:50:29.012161  443658 logs.go:282] 0 containers: []
	W1014 19:50:29.012172  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:29.012183  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:29.012238  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:29.038850  443658 cri.go:89] found id: ""
	I1014 19:50:29.038870  443658 logs.go:282] 0 containers: []
	W1014 19:50:29.038880  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:29.038891  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:29.038902  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:29.069928  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:29.069967  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:29.138190  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:29.138214  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:29.156875  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:29.156904  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:29.216410  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:29.208955   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.209524   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211285   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211710   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.213259   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:29.208955   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.209524   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211285   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211710   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.213259   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:29.216425  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:29.216442  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:31.781917  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:31.793447  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:31.793505  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:31.821136  443658 cri.go:89] found id: ""
	I1014 19:50:31.821153  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.821160  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:31.821165  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:31.821214  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:31.849490  443658 cri.go:89] found id: ""
	I1014 19:50:31.849508  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.849515  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:31.849520  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:31.849573  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:31.876743  443658 cri.go:89] found id: ""
	I1014 19:50:31.876777  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.876785  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:31.876790  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:31.876842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:31.905558  443658 cri.go:89] found id: ""
	I1014 19:50:31.905576  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.905584  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:31.905591  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:31.905654  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:31.934155  443658 cri.go:89] found id: ""
	I1014 19:50:31.934174  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.934185  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:31.934191  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:31.934252  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:31.961840  443658 cri.go:89] found id: ""
	I1014 19:50:31.961857  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.961870  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:31.961875  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:31.961924  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:31.989285  443658 cri.go:89] found id: ""
	I1014 19:50:31.989306  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.989317  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:31.989330  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:31.989341  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:32.061358  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:32.061382  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:32.080223  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:32.080243  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:32.142648  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:32.134637   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.135263   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137075   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137669   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.139334   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:32.134637   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.135263   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137075   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137669   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.139334   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:32.142684  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:32.142699  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:32.209500  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:32.209528  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:34.742153  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:34.753291  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:34.753345  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:34.781021  443658 cri.go:89] found id: ""
	I1014 19:50:34.781038  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.781045  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:34.781050  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:34.781097  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:34.807324  443658 cri.go:89] found id: ""
	I1014 19:50:34.807341  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.807349  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:34.807354  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:34.807402  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:34.834727  443658 cri.go:89] found id: ""
	I1014 19:50:34.834748  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.834771  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:34.834778  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:34.834833  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:34.861999  443658 cri.go:89] found id: ""
	I1014 19:50:34.862019  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.862031  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:34.862037  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:34.862087  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:34.889667  443658 cri.go:89] found id: ""
	I1014 19:50:34.889684  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.889690  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:34.889694  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:34.889742  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:34.916811  443658 cri.go:89] found id: ""
	I1014 19:50:34.916828  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.916834  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:34.916840  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:34.916899  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:34.944926  443658 cri.go:89] found id: ""
	I1014 19:50:34.944943  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.944951  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:34.944959  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:34.944973  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:35.013004  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:35.013029  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:35.030877  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:35.030903  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:35.089384  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:35.081483   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.082170   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.083809   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.084270   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.085889   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:35.081483   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.082170   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.083809   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.084270   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.085889   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:35.089398  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:35.089409  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:35.149874  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:35.149899  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:37.684070  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:37.695415  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:37.695469  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:37.723582  443658 cri.go:89] found id: ""
	I1014 19:50:37.723598  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.723605  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:37.723611  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:37.723688  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:37.751328  443658 cri.go:89] found id: ""
	I1014 19:50:37.751347  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.751354  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:37.751363  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:37.751410  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:37.779279  443658 cri.go:89] found id: ""
	I1014 19:50:37.779300  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.779311  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:37.779317  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:37.779392  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:37.806937  443658 cri.go:89] found id: ""
	I1014 19:50:37.806954  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.806974  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:37.806979  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:37.807028  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:37.834418  443658 cri.go:89] found id: ""
	I1014 19:50:37.834435  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.834442  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:37.834447  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:37.834495  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:37.861687  443658 cri.go:89] found id: ""
	I1014 19:50:37.861705  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.861712  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:37.861719  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:37.861791  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:37.889605  443658 cri.go:89] found id: ""
	I1014 19:50:37.889622  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.889628  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:37.889637  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:37.889648  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:37.954899  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:37.954928  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:37.988108  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:37.988128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:38.058132  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:38.058158  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:38.076773  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:38.076795  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:38.135957  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:38.127889   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.128350   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.130577   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.131078   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.132629   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:38.127889   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.128350   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.130577   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.131078   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.132629   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
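
Every describe-nodes failure above is the same symptom: nothing is listening on the apiserver port, so each kubectl call dies with "connection refused" before any API discovery can happen. A quick probe to confirm that from the node, assuming the port 8441 shown in the errors:

	# Exits non-zero while the apiserver is down; -k skips cert checks,
	# which is fine for a pure reachability test.
	curl -sk --max-time 5 https://localhost:8441/livez >/dev/null \
	  || echo "apiserver not reachable on :8441"
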
	I1014 19:50:40.636752  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:40.647999  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:40.648055  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:40.674081  443658 cri.go:89] found id: ""
	I1014 19:50:40.674099  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.674107  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:40.674112  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:40.674160  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:40.701160  443658 cri.go:89] found id: ""
	I1014 19:50:40.701177  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.701184  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:40.701189  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:40.701252  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:40.728441  443658 cri.go:89] found id: ""
	I1014 19:50:40.728462  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.728472  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:40.728480  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:40.728527  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:40.756302  443658 cri.go:89] found id: ""
	I1014 19:50:40.756318  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.756325  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:40.756330  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:40.756375  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:40.782665  443658 cri.go:89] found id: ""
	I1014 19:50:40.782682  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.782721  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:40.782727  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:40.782808  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:40.809993  443658 cri.go:89] found id: ""
	I1014 19:50:40.810011  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.810017  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:40.810022  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:40.810081  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:40.837750  443658 cri.go:89] found id: ""
	I1014 19:50:40.837785  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.837795  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:40.837805  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:40.837816  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:40.905565  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:40.905598  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:40.923794  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:40.923817  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:40.982479  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:40.975467   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.976110   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.977609   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.978094   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.979129   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:40.982490  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:40.982503  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:41.043844  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:41.043869  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:43.575810  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:43.587076  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:43.587126  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:43.613973  443658 cri.go:89] found id: ""
	I1014 19:50:43.613992  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.614001  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:43.614007  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:43.614062  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:43.641631  443658 cri.go:89] found id: ""
	I1014 19:50:43.641649  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.641655  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:43.641662  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:43.641740  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:43.668838  443658 cri.go:89] found id: ""
	I1014 19:50:43.668853  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.668860  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:43.668865  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:43.668912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:43.696427  443658 cri.go:89] found id: ""
	I1014 19:50:43.696447  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.696457  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:43.696464  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:43.696515  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:43.723629  443658 cri.go:89] found id: ""
	I1014 19:50:43.723646  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.723652  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:43.723657  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:43.723738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:43.750543  443658 cri.go:89] found id: ""
	I1014 19:50:43.750564  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.750573  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:43.750579  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:43.750630  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:43.777077  443658 cri.go:89] found id: ""
	I1014 19:50:43.777094  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.777100  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:43.777109  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:43.777123  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:43.847663  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:43.847745  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:43.865887  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:43.865906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:43.924883  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:43.917622   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.918218   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.919830   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.920193   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.921570   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:43.924899  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:43.924910  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:43.985909  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:43.985934  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:46.519152  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:46.530574  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:46.530626  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:46.557422  443658 cri.go:89] found id: ""
	I1014 19:50:46.557437  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.557443  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:46.557448  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:46.557494  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:46.584670  443658 cri.go:89] found id: ""
	I1014 19:50:46.584690  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.584699  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:46.584704  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:46.584777  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:46.611880  443658 cri.go:89] found id: ""
	I1014 19:50:46.611898  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.611905  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:46.611912  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:46.611961  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:46.639343  443658 cri.go:89] found id: ""
	I1014 19:50:46.639358  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.639365  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:46.639370  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:46.639420  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:46.667657  443658 cri.go:89] found id: ""
	I1014 19:50:46.667677  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.667686  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:46.667693  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:46.667751  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:46.694195  443658 cri.go:89] found id: ""
	I1014 19:50:46.694218  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.694228  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:46.694234  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:46.694288  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:46.723852  443658 cri.go:89] found id: ""
	I1014 19:50:46.723873  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.723883  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:46.723893  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:46.723911  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:46.795594  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:46.795617  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:46.813986  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:46.814005  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:46.874107  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:46.866264   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.866806   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868435   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868992   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.870716   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:46.874123  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:46.874137  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:46.939214  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:46.939239  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:49.472291  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:49.483645  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:49.483703  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:49.512485  443658 cri.go:89] found id: ""
	I1014 19:50:49.512508  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.512519  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:49.512526  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:49.512579  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:49.541986  443658 cri.go:89] found id: ""
	I1014 19:50:49.542003  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.542010  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:49.542015  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:49.542062  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:49.568820  443658 cri.go:89] found id: ""
	I1014 19:50:49.568837  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.568843  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:49.568848  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:49.568904  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:49.595650  443658 cri.go:89] found id: ""
	I1014 19:50:49.595667  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.595674  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:49.595679  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:49.595738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:49.624580  443658 cri.go:89] found id: ""
	I1014 19:50:49.624597  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.624604  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:49.624610  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:49.624668  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:49.651849  443658 cri.go:89] found id: ""
	I1014 19:50:49.651871  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.651881  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:49.651888  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:49.651942  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:49.679343  443658 cri.go:89] found id: ""
	I1014 19:50:49.679361  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.679369  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:49.679378  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:49.679390  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:49.710667  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:49.710688  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:49.779683  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:49.779708  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:49.797614  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:49.797632  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:49.858709  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:49.850102   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.850643   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853179   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853667   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.855254   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:49.858721  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:49.858734  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:52.425201  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:52.437033  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:52.437091  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:52.464814  443658 cri.go:89] found id: ""
	I1014 19:50:52.464835  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.464845  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:52.464852  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:52.464920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:52.493108  443658 cri.go:89] found id: ""
	I1014 19:50:52.493128  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.493141  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:52.493147  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:52.493206  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:52.520875  443658 cri.go:89] found id: ""
	I1014 19:50:52.520896  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.520905  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:52.520912  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:52.520971  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:52.548477  443658 cri.go:89] found id: ""
	I1014 19:50:52.548496  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.548503  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:52.548509  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:52.548571  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:52.576240  443658 cri.go:89] found id: ""
	I1014 19:50:52.576260  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.576272  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:52.576278  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:52.576345  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:52.604501  443658 cri.go:89] found id: ""
	I1014 19:50:52.604519  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.604529  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:52.604535  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:52.604605  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:52.636730  443658 cri.go:89] found id: ""
	I1014 19:50:52.636746  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.636777  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:52.636789  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:52.636802  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:52.708243  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:52.708275  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:52.726867  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:52.726890  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:52.785730  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:52.778588   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.779176   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.780807   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.781257   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.782451   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:52.785743  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:52.785783  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:52.849671  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:52.849695  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:55.381592  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:55.393025  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:55.393093  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:55.422130  443658 cri.go:89] found id: ""
	I1014 19:50:55.422150  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.422159  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:55.422166  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:55.422225  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:55.449578  443658 cri.go:89] found id: ""
	I1014 19:50:55.449593  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.449599  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:55.449606  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:55.449652  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:55.478330  443658 cri.go:89] found id: ""
	I1014 19:50:55.478349  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.478359  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:55.478366  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:55.478418  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:55.506046  443658 cri.go:89] found id: ""
	I1014 19:50:55.506062  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.506069  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:55.506075  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:55.506121  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:55.533431  443658 cri.go:89] found id: ""
	I1014 19:50:55.533448  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.533460  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:55.533464  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:55.533512  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:55.559554  443658 cri.go:89] found id: ""
	I1014 19:50:55.559571  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.559579  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:55.559583  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:55.559628  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:55.586490  443658 cri.go:89] found id: ""
	I1014 19:50:55.586506  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.586513  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:55.586522  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:55.586533  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:55.654422  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:55.654447  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:55.673174  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:55.673195  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:55.732549  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:55.725166   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.725836   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727380   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727867   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.729272   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:55.732565  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:55.732578  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:55.798718  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:55.798747  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:58.332284  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:58.343801  443658 kubeadm.go:601] duration metric: took 4m4.243920348s to restartPrimaryControlPlane
	W1014 19:50:58.343901  443658 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 19:50:58.344005  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:50:58.799455  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:50:58.813683  443658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:50:58.822431  443658 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:50:58.822479  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:50:58.830731  443658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:50:58.830743  443658 kubeadm.go:157] found existing configuration files:
	
	I1014 19:50:58.830813  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:50:58.838788  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:50:58.838843  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:50:58.846629  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:50:58.854899  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:50:58.854960  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:50:58.862796  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:50:58.870845  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:50:58.870900  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:50:58.878602  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:50:58.886687  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:50:58.886812  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:50:58.894706  443658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:50:58.956049  443658 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:50:59.017911  443658 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:55:01.512196  443658 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1014 19:55:01.512300  443658 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:55:01.515811  443658 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:55:01.515863  443658 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:55:01.515937  443658 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:55:01.515981  443658 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:55:01.516011  443658 kubeadm.go:318] OS: Linux
	I1014 19:55:01.516049  443658 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:55:01.516087  443658 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:55:01.516133  443658 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:55:01.516172  443658 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:55:01.516210  443658 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:55:01.516249  443658 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:55:01.516288  443658 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:55:01.516322  443658 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:55:01.516431  443658 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:55:01.516587  443658 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:55:01.516701  443658 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:55:01.516795  443658 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:55:01.519360  443658 out.go:252]   - Generating certificates and keys ...
	I1014 19:55:01.519469  443658 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:55:01.519557  443658 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:55:01.519666  443658 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:55:01.519744  443658 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:55:01.519850  443658 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:55:01.519914  443658 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:55:01.519978  443658 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:55:01.520034  443658 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:55:01.520097  443658 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:55:01.520167  443658 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:55:01.520203  443658 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:55:01.520251  443658 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:55:01.520299  443658 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:55:01.520348  443658 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:55:01.520393  443658 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:55:01.520450  443658 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:55:01.520499  443658 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:55:01.520576  443658 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:55:01.520641  443658 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:55:01.523229  443658 out.go:252]   - Booting up control plane ...
	I1014 19:55:01.523319  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:55:01.523390  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:55:01.523444  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:55:01.523551  443658 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:55:01.523641  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:55:01.523810  443658 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:55:01.523922  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:55:01.523954  443658 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:55:01.524086  443658 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:55:01.524181  443658 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:55:01.524234  443658 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.568458ms
	I1014 19:55:01.524321  443658 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:55:01.524389  443658 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:55:01.524486  443658 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:55:01.524591  443658 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:55:01.524662  443658 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	I1014 19:55:01.524728  443658 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	I1014 19:55:01.524840  443658 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	I1014 19:55:01.524843  443658 kubeadm.go:318] 
	I1014 19:55:01.524928  443658 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:55:01.525021  443658 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:55:01.525148  443658 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:55:01.525276  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:55:01.525390  443658 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:55:01.525475  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:55:01.525507  443658 kubeadm.go:318] 
	W1014 19:55:01.525679  443658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.568458ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
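The triage hint above can be followed directly on the node; a minimal sketch using the socket path quoted in the log (CONTAINERID stays a placeholder until the first command identifies a failing container):

	# List all Kubernetes containers, including exited ones, excluding pause sandboxes
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of the failing container found above
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Clear the Service-Kubelet preflight warning, as the stderr suggests
	sudo systemctl enable kubelet.service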
	
	I1014 19:55:01.525798  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:55:01.982887  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:55:01.996173  443658 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:55:01.996227  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:55:02.004750  443658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:55:02.004776  443658 kubeadm.go:157] found existing configuration files:
	
	I1014 19:55:02.004817  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:55:02.013003  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:55:02.013070  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:55:02.021099  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:55:02.029431  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:55:02.029492  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:55:02.037121  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:55:02.045152  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:55:02.045198  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:55:02.052887  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:55:02.060584  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:55:02.060626  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
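The four grep-and-rm pairs above are minikube's stale-kubeconfig sweep; the same pattern condenses to a loop (file names and control-plane endpoint taken from the log lines above, not a minikube-provided script):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Drop the file unless it already points at the expected control-plane endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done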
	I1014 19:55:02.068308  443658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:55:02.126727  443658 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:55:02.188353  443658 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:59:05.052390  443658 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 19:59:05.052568  443658 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:59:05.055525  443658 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:59:05.055579  443658 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:59:05.055669  443658 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:59:05.055719  443658 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:59:05.055746  443658 kubeadm.go:318] OS: Linux
	I1014 19:59:05.055802  443658 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:59:05.055840  443658 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:59:05.055878  443658 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:59:05.055926  443658 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:59:05.055963  443658 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:59:05.056004  443658 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:59:05.056049  443658 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:59:05.056084  443658 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:59:05.056142  443658 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:59:05.056223  443658 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:59:05.056299  443658 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:59:05.056392  443658 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:59:05.059274  443658 out.go:252]   - Generating certificates and keys ...
	I1014 19:59:05.059351  443658 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:59:05.059415  443658 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:59:05.059493  443658 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:59:05.059567  443658 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:59:05.059629  443658 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:59:05.059672  443658 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:59:05.059751  443658 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:59:05.059826  443658 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:59:05.059887  443658 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:59:05.059966  443658 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:59:05.060015  443658 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:59:05.060080  443658 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:59:05.060144  443658 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:59:05.060195  443658 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:59:05.060238  443658 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:59:05.060288  443658 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:59:05.060337  443658 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:59:05.060403  443658 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:59:05.060483  443658 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:59:05.061914  443658 out.go:252]   - Booting up control plane ...
	I1014 19:59:05.062009  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:59:05.062118  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:59:05.062251  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:59:05.062371  443658 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:59:05.062470  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:59:05.062594  443658 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:59:05.062668  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:59:05.062709  443658 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:59:05.062894  443658 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:59:05.063001  443658 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:59:05.063067  443658 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001430917s
	I1014 19:59:05.063161  443658 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:59:05.063245  443658 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:59:05.063317  443658 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:59:05.063385  443658 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:59:05.063443  443658 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	I1014 19:59:05.063502  443658 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	I1014 19:59:05.063588  443658 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	I1014 19:59:05.063599  443658 kubeadm.go:318] 
	I1014 19:59:05.063715  443658 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:59:05.063820  443658 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:59:05.063899  443658 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:59:05.064013  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:59:05.064087  443658 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:59:05.064169  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:59:05.064205  443658 kubeadm.go:318] 
	I1014 19:59:05.064256  443658 kubeadm.go:402] duration metric: took 12m11.001770383s to StartCluster
	I1014 19:59:05.064319  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:59:05.064377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:59:05.094590  443658 cri.go:89] found id: ""
	I1014 19:59:05.094608  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.094615  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:59:05.094620  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:59:05.094695  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:59:05.123951  443658 cri.go:89] found id: ""
	I1014 19:59:05.123969  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.123989  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:59:05.123996  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:59:05.124057  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:59:05.153788  443658 cri.go:89] found id: ""
	I1014 19:59:05.153806  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.153813  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:59:05.153818  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:59:05.153866  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:59:05.182209  443658 cri.go:89] found id: ""
	I1014 19:59:05.182227  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.182233  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:59:05.182239  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:59:05.182295  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:59:05.211682  443658 cri.go:89] found id: ""
	I1014 19:59:05.211743  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.211773  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:59:05.211787  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:59:05.211840  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:59:05.239904  443658 cri.go:89] found id: ""
	I1014 19:59:05.239927  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.239935  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:59:05.239942  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:59:05.239993  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:59:05.266617  443658 cri.go:89] found id: ""
	I1014 19:59:05.266636  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.266643  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:59:05.266710  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:59:05.266747  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:59:05.284891  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:59:05.284919  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:59:05.345910  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:05.338670   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.339278   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.340773   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.341189   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.342723   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:59:05.338670   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.339278   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.340773   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.341189   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.342723   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:59:05.345933  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:59:05.345953  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:59:05.410981  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:59:05.411011  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:59:05.441593  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:59:05.441611  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
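The log-gathering pass above reduces to a handful of commands that can be replayed by hand on the node; this sketch reuses the exact invocations from the log lines above:

	sudo journalctl -u kubelet -n 400      # kubelet unit logs
	sudo journalctl -u crio -n 400         # CRI-O unit logs
	sudo crictl ps -a                      # container status (the log falls back to docker ps -a)
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400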
	W1014 19:59:05.511762  443658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 19:59:05.511841  443658 out.go:285] * 
	W1014 19:59:05.511933  443658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:59:05.511948  443658 out.go:285] * 
	W1014 19:59:05.513702  443658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:59:05.517408  443658 out.go:203] 
	W1014 19:59:05.518938  443658 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:59:05.518965  443658 out.go:285] * 
	I1014 19:59:05.520443  443658 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.636277209Z" level=info msg="createCtr: removing container 65f0dc73ec9ea69e31501c976b4433418c103bbf0b3ac355e8829c0387caf4fa" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.636310381Z" level=info msg="createCtr: deleting container 65f0dc73ec9ea69e31501c976b4433418c103bbf0b3ac355e8829c0387caf4fa from storage" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:58 functional-744288 crio[5849]: time="2025-10-14T19:58:58.638326871Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=f3386d3c-bc60-4033-afdf-c1e91baa2cb1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.611499521Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1d103bae-e0cd-43b6-a8b9-21dbf6ee25eb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.612463811Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5ccb7435-2feb-4843-b580-b73b2136ca02 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.613443601Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.613680478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.618067642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.6186241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.638374963Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.640112713Z" level=info msg="createCtr: deleting container ID e6170d040b69887f4e204511d672261a5b0442c88d3d9199109a75deab8a7473 from idIndex" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.640173593Z" level=info msg="createCtr: removing container e6170d040b69887f4e204511d672261a5b0442c88d3d9199109a75deab8a7473" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.640224188Z" level=info msg="createCtr: deleting container e6170d040b69887f4e204511d672261a5b0442c88d3d9199109a75deab8a7473 from storage" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:58:59 functional-744288 crio[5849]: time="2025-10-14T19:58:59.642655814Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_5ce31098ce493b77069c880f0c6ac8e6_0" id=e84811a9-7a59-4792-8343-435666edc285 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.610817996Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=372c4d4e-6b47-4045-8ba6-b6b7e22a7cf5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.611997294Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d4bbc804-55a9-4018-bb4f-cabaff200ebf name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.613018254Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-744288/kube-scheduler" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.613300745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.617547351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.618068516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.6344609Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.636163299Z" level=info msg="createCtr: deleting container ID ba1635cefc5bd2a1316a1e192404195915ecc834a816f34cf5fe7882411e473d from idIndex" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.636220995Z" level=info msg="createCtr: removing container ba1635cefc5bd2a1316a1e192404195915ecc834a816f34cf5fe7882411e473d" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.63626981Z" level=info msg="createCtr: deleting container ba1635cefc5bd2a1316a1e192404195915ecc834a816f34cf5fe7882411e473d from storage" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:06 functional-744288 crio[5849]: time="2025-10-14T19:59:06.639472602Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-744288_kube-system_e9679524bf37cc2b727411d0e5a93bfe_0" id=2f1ab279-d80f-4567-a165-3cd4a2d97179 name=/runtime.v1.RuntimeService/CreateContainer
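
Every CreateContainer attempt in the CRI-O log above dies with the same runtime error, "cannot open sd-bus: No such file or directory": the runtime is driving cgroups through systemd (the docker info further below reports CgroupDriver:systemd) but cannot reach a systemd bus from inside the node container. A hedged check sketch; the paths are the conventional systemd locations, not taken from this log:

	# Which cgroup manager is CRI-O configured with?
	crio config 2>/dev/null | grep cgroup_manager
	# The systemd cgroup manager needs a reachable systemd D-Bus socket
	ls -l /run/systemd/system /run/dbus/system_bus_socket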
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:08.686630   15946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:08.687150   15946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:08.688750   15946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:08.689241   15946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:08.690818   15946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
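
Every kubectl attempt here fails at the TCP dial stage, which points at the apiserver itself rather than the kubeconfig; a quick probe against the livez endpoint shown in the log distinguishes the two (-k skips TLS verification, since the apiserver certificate is not signed by a system CA):

	curl -k https://192.168.49.2:8441/livez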
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:08 up  2:41,  0 user,  load average: 0.31, 0.15, 1.10
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.611005   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.643037   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:58:59 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:58:59 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.643143   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:58:59 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:58:59 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:58:59 functional-744288 kubelet[15039]: E1014 19:58:59.643181   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	Oct 14 19:59:01 functional-744288 kubelet[15039]: E1014 19:59:01.234434   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:01 functional-744288 kubelet[15039]: I1014 19:59:01.389707   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:01 functional-744288 kubelet[15039]: E1014 19:59:01.390137   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:59:04 functional-744288 kubelet[15039]: E1014 19:59:04.623685   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:59:05 functional-744288 kubelet[15039]: E1014 19:59:05.375495   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01ddb1340  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,LastTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:59:05 functional-744288 kubelet[15039]: E1014 19:59:05.963951   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.610362   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.639866   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:06 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:06 functional-744288 kubelet[15039]:  > podSandboxID="6db547a209d52d0398507b1da96eecbcd999edc615f9bed4939047b6f878db45"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.640022   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:06 functional-744288 kubelet[15039]:         container kube-scheduler start failed in pod kube-scheduler-functional-744288_kube-system(e9679524bf37cc2b727411d0e5a93bfe): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:06 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:06 functional-744288 kubelet[15039]: E1014 19:59:06.640064   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-744288" podUID="e9679524bf37cc2b727411d0e5a93bfe"
	Oct 14 19:59:08 functional-744288 kubelet[15039]: E1014 19:59:08.235421   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:08 functional-744288 kubelet[15039]: I1014 19:59:08.391351   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:08 functional-744288 kubelet[15039]: E1014 19:59:08.391917   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (315.538509ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.96s)

TestFunctional/serial/InvalidService (0.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-744288 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-744288 apply -f testdata/invalidsvc.yaml: exit status 1 (51.176157ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-744288 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)
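The apply fails at client-side validation only because the OpenAPI download needs a live apiserver, so this run never actually exercised the invalid manifest. A sketch for separating the two failure modes, assuming the same context name (/readyz is the standard apiserver health endpoint):

	# Probe apiserver readiness first; only an "ok" response makes the manifest test meaningful.
	kubectl --context functional-744288 get --raw /readyz
	kubectl --context functional-744288 apply -f testdata/invalidsvc.yaml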

TestFunctional/parallel/DashboardCmd (1.76s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-744288 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-744288 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-744288 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-744288 --alsologtostderr -v=1] stderr:
I1014 19:59:16.160146  461346 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:16.160294  461346 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:16.160305  461346 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:16.160311  461346 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:16.160545  461346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:16.160912  461346 mustload.go:65] Loading cluster: functional-744288
I1014 19:59:16.161282  461346 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:16.161724  461346 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:16.180116  461346 host.go:66] Checking if "functional-744288" exists ...
I1014 19:59:16.180398  461346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1014 19:59:16.241652  461346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:16.231837835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1014 19:59:16.241788  461346 api_server.go:166] Checking apiserver status ...
I1014 19:59:16.241839  461346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1014 19:59:16.241875  461346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:16.262052  461346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
W1014 19:59:16.371249  461346 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1014 19:59:16.375812  461346 out.go:179] * The control-plane node functional-744288 apiserver is not running: (state=Stopped)
I1014 19:59:16.377578  461346 out.go:179]   To start a cluster, run: "minikube start -p functional-744288"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (340.385558ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config    │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ config    │ functional-744288 config set cpus 2                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config    │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config    │ functional-744288 config unset cpus                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh       │ functional-744288 ssh -n functional-744288 sudo cat /home/docker/cp-test.txt                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config    │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ service   │ functional-744288 service list -o json                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh       │ functional-744288 ssh echo hello                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ cp        │ functional-744288 cp functional-744288:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3238097529/001/cp-test.txt │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service   │ functional-744288 service --namespace=default --https --url hello-node                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh       │ functional-744288 ssh cat /etc/hostname                                                                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh       │ functional-744288 ssh -n functional-744288 sudo cat /home/docker/cp-test.txt                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service   │ functional-744288 service hello-node --url --format={{.IP}}                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel    │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel    │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ service   │ functional-744288 service hello-node --url                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ cp        │ functional-744288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ tunnel    │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh       │ functional-744288 ssh -n functional-744288 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ addons    │ functional-744288 addons list                                                                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ addons    │ functional-744288 addons list -o json                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ start     │ -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ start     │ -p functional-744288 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ start     │ -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-744288 --alsologtostderr -v=1                                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:59:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:59:15.984416  461207 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.984586  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.984597  461207 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.984604  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.985010  461207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.985511  461207 out.go:368] Setting JSON to false
	I1014 19:59:15.986502  461207 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.986600  461207 start.go:141] virtualization: kvm guest
	I1014 19:59:15.988840  461207 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.990551  461207 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.990567  461207 notify.go:220] Checking for updates...
	I1014 19:59:15.993365  461207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.994948  461207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.997169  461207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.999150  461207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:16.000873  461207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:59:16.003345  461207 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:16.004102  461207 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:59:16.029353  461207 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:59:16.029472  461207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:59:16.097661  461207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:16.086601927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:59:16.097897  461207 docker.go:318] overlay module found
	I1014 19:59:16.099803  461207 out.go:179] * Using the docker driver based on existing profile
	I1014 19:59:16.101025  461207 start.go:305] selected driver: docker
	I1014 19:59:16.101045  461207 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:16.101172  461207 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:59:16.103591  461207 out.go:203] 
	W1014 19:59:16.105109  461207 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 19:59:16.106244  461207 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.647391921Z" level=info msg="createCtr: removing container 6e2c2c4cb04a0ff330473aae999924576003bb30cc6d310e8d22ce70f7fdc315" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.647463494Z" level=info msg="createCtr: deleting container 6e2c2c4cb04a0ff330473aae999924576003bb30cc6d310e8d22ce70f7fdc315 from storage" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.650679605Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.611367475Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=036c7d74-f14a-4e37-bb50-6bb0624e5a1e name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.611466309Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=76283b85-fae3-4575-a63d-e9f1083700fd name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.612473494Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b09af6a-8521-4074-934a-fe4637b5d212 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.612574787Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=17ee2120-156c-4b7c-a568-480cee735a23 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613538771Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613632658Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-744288/kube-controller-manager" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613813713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613827311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.62134908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.62198193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.623398407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.624010925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.643915388Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.64664635Z" level=info msg="createCtr: deleting container ID 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91 from idIndex" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.646840424Z" level=info msg="createCtr: removing container 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.646898107Z" level=info msg="createCtr: deleting container 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91 from storage" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.64748926Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649668552Z" level=info msg="createCtr: deleting container ID d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2 from idIndex" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649712371Z" level=info msg="createCtr: removing container d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649865828Z" level=info msg="createCtr: deleting container d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2 from storage" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.652478939Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_5ce31098ce493b77069c880f0c6ac8e6_0" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.652817187Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:17.462273   17067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:17.462870   17067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:17.463945   17067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:17.464540   17067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:17.466138   17067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:17 up  2:41,  0 user,  load average: 1.19, 0.34, 1.16
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:59:09 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:09 functional-744288 kubelet[15039]: E1014 19:59:09.651361   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:59:10 functional-744288 kubelet[15039]: E1014 19:59:10.031269   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-744288&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 14 19:59:10 functional-744288 kubelet[15039]: E1014 19:59:10.439574   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.610788   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.610919   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652838   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652950   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652998   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.653087   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > podSandboxID="834c7ad581f3fcc6f5d04a9ecdd22e99efde1b20033a85c33ba33f7567fe39fc"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.653125   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.654224   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:59:14 functional-744288 kubelet[15039]: E1014 19:59:14.624236   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.236841   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.377190   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01ddb1340  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,LastTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: I1014 19:59:15.393702   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.394167   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (329.861545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.76s)
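The dashboard command never printed a URL because it aborts at the same apiserver probe shown in its stderr: sudo pgrep -xnf kube-apiserver.*minikube.* exits 1 inside the node. As a sketch, that probe can be replayed by hand over the mapped SSH endpoint recorded in the sshutil line above (127.0.0.1:32898, user docker, key path from this report):

	# Reproduce minikube's apiserver check; exit status 1 (no match) is what maps to "state=Stopped".
	ssh -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa \
	    -p 32898 docker@127.0.0.1 'sudo pgrep -xnf kube-apiserver.*minikube.*'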

TestFunctional/parallel/StatusCmd (3.47s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 status: exit status 2 (367.655959ms)

-- stdout --
	functional-744288
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-744288 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (369.604979ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-744288 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 status -o json: exit status 2 (398.045325ms)

-- stdout --
	{"Name":"functional-744288","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-744288 status -o json" : exit status 2
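All three status invocations produce output but exit 2, matching the helpers_test note elsewhere in this report that a non-zero status exit "may be ok": minikube uses the exit code to signal that some component is down. A script consuming the JSON form could gate on the field rather than the exit code, for example (jq being available on the host is an assumption):

	# Exit code 2 still leaves valid JSON on stdout; read the field directly.
	out/minikube-linux-amd64 -p functional-744288 status -o json | jq -r .APIServer    # prints "Stopped" here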
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
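The inspect output above also shows how the node is reached from the host: under NetworkSettings.Ports, container port 22/tcp is published on 127.0.0.1:32898, the same endpoint the SSH client dials later in the start log. A minimal Go sketch, assuming inspect JSON of this shape on stdin (type names are illustrative):

	// Minimal sketch: read `docker inspect functional-744288` JSON from stdin
	// and print the published host endpoint for container port 22/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var entries []inspectEntry // docker inspect emits an array of containers
		if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
			panic(err)
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			fmt.Println("22/tcp is not published")
			return
		}
		// For the container above this prints 127.0.0.1:32898.
		fmt.Printf("%s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}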
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (425.190643ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
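The harness tolerates this non-zero exit because the command still produced a usable host state (Running above). A minimal Go sketch of that pattern, distinguishing a command that ran but reported a problem from one that could not start; the binary path and profile name are taken from this report:

	// Minimal sketch of the "exit status 2 (may be ok)" tolerance: a non-zero
	// exit with readable output is reported, not treated as a hard failure.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "functional-744288")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The command ran and wrote output, but reported a degraded state.
			fmt.Printf("status exited %d (may be ok), output: %s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err) // the binary could not be started at all
		}
		fmt.Printf("status ok: %s", out)
	}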
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 logs -n 25: (1.11295128s)
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │ 14 Oct 25 19:46 UTC │
	│ kubectl │ functional-744288 kubectl -- --context functional-744288 get pods                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ start   │ -p functional-744288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:46 UTC │                     │
	│ cp      │ functional-744288 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config unset cpus                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service │ functional-744288 service list                                                                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ config  │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ config  │ functional-744288 config set cpus 2                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config unset cpus                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh -n functional-744288 sudo cat /home/docker/cp-test.txt                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ service │ functional-744288 service list -o json                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh echo hello                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ cp      │ functional-744288 cp functional-744288:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3238097529/001/cp-test.txt │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service │ functional-744288 service --namespace=default --https --url hello-node                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh cat /etc/hostname                                                                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh -n functional-744288 sudo cat /home/docker/cp-test.txt                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service │ functional-744288 service hello-node --url --format={{.IP}}                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel  │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel  │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ service │ functional-744288 service hello-node --url                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ cp      │ functional-744288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel  │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:46:50
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:46:50.499742  443658 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:46:50.500016  443658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:46:50.500020  443658 out.go:374] Setting ErrFile to fd 2...
	I1014 19:46:50.500023  443658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:46:50.500243  443658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:46:50.500711  443658 out.go:368] Setting JSON to false
	I1014 19:46:50.501776  443658 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8957,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:46:50.501876  443658 start.go:141] virtualization: kvm guest
	I1014 19:46:50.504465  443658 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:46:50.505861  443658 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:46:50.505882  443658 notify.go:220] Checking for updates...
	I1014 19:46:50.508327  443658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:46:50.509750  443658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:46:50.510866  443658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:46:50.511854  443658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:46:50.512854  443658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:46:50.514315  443658 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:46:50.514426  443658 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:46:50.538310  443658 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:46:50.538445  443658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:46:50.601114  443658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-14 19:46:50.588718622 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:46:50.601209  443658 docker.go:318] overlay module found
	I1014 19:46:50.603086  443658 out.go:179] * Using the docker driver based on existing profile
	I1014 19:46:50.604379  443658 start.go:305] selected driver: docker
	I1014 19:46:50.604388  443658 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:50.604469  443658 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:46:50.604549  443658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:46:50.666156  443658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-14 19:46:50.655387801 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:46:50.666705  443658 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:46:50.666723  443658 cni.go:84] Creating CNI manager for ""
	I1014 19:46:50.666779  443658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:46:50.666824  443658 start.go:349] cluster config:
	{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:50.668890  443658 out.go:179] * Starting "functional-744288" primary control-plane node in "functional-744288" cluster
	I1014 19:46:50.670269  443658 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:46:50.671700  443658 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:46:50.672853  443658 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:46:50.672887  443658 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:46:50.672894  443658 cache.go:58] Caching tarball of preloaded images
	I1014 19:46:50.672978  443658 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:46:50.672993  443658 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:46:50.673002  443658 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:46:50.673099  443658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/config.json ...
	I1014 19:46:50.694236  443658 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 19:46:50.694247  443658 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 19:46:50.694262  443658 cache.go:232] Successfully downloaded all kic artifacts
	I1014 19:46:50.694285  443658 start.go:360] acquireMachinesLock for functional-744288: {Name:mk27c3a9a4edec1c99a109c410361619ff35ec14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:46:50.694339  443658 start.go:364] duration metric: took 40.961µs to acquireMachinesLock for "functional-744288"
	I1014 19:46:50.694355  443658 start.go:96] Skipping create...Using existing machine configuration
	I1014 19:46:50.694359  443658 fix.go:54] fixHost starting: 
	I1014 19:46:50.694551  443658 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:46:50.713829  443658 fix.go:112] recreateIfNeeded on functional-744288: state=Running err=<nil>
	W1014 19:46:50.713852  443658 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 19:46:50.716011  443658 out.go:252] * Updating the running docker "functional-744288" container ...
	I1014 19:46:50.716063  443658 machine.go:93] provisionDockerMachine start ...
	I1014 19:46:50.716145  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:50.734693  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:50.734948  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:50.734956  443658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 19:46:50.881904  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:46:50.881928  443658 ubuntu.go:182] provisioning hostname "functional-744288"
	I1014 19:46:50.882024  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:50.900923  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:50.901187  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:50.901202  443658 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-744288 && echo "functional-744288" | sudo tee /etc/hostname
	I1014 19:46:51.056989  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-744288
	
	I1014 19:46:51.057085  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.074806  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:51.075019  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:51.075030  443658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-744288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-744288/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-744288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:46:51.221854  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:46:51.221878  443658 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 19:46:51.221910  443658 ubuntu.go:190] setting up certificates
	I1014 19:46:51.221952  443658 provision.go:84] configureAuth start
	I1014 19:46:51.222015  443658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:46:51.240005  443658 provision.go:143] copyHostCerts
	I1014 19:46:51.240069  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 19:46:51.240090  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 19:46:51.240177  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 19:46:51.240322  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 19:46:51.240330  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 19:46:51.240371  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 19:46:51.240443  443658 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 19:46:51.240447  443658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 19:46:51.240478  443658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 19:46:51.240545  443658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.functional-744288 san=[127.0.0.1 192.168.49.2 functional-744288 localhost minikube]
	I1014 19:46:51.277418  443658 provision.go:177] copyRemoteCerts
	I1014 19:46:51.277469  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:46:51.277512  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.295935  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:51.399940  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 19:46:51.419014  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 19:46:51.436411  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:46:51.453971  443658 provision.go:87] duration metric: took 232.002826ms to configureAuth
	I1014 19:46:51.453999  443658 ubuntu.go:206] setting minikube options for container-runtime
	I1014 19:46:51.454155  443658 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:46:51.454253  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.471667  443658 main.go:141] libmachine: Using SSH client type: native
	I1014 19:46:51.471917  443658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32898 <nil> <nil>}
	I1014 19:46:51.471928  443658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:46:51.753714  443658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:46:51.753736  443658 machine.go:96] duration metric: took 1.037666418s to provisionDockerMachine
	I1014 19:46:51.753750  443658 start.go:293] postStartSetup for "functional-744288" (driver="docker")
	I1014 19:46:51.753791  443658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:46:51.753870  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:46:51.753924  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.771894  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:51.875275  443658 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:46:51.879014  443658 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 19:46:51.879036  443658 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 19:46:51.879053  443658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 19:46:51.879110  443658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 19:46:51.879192  443658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 19:46:51.879264  443658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts -> hosts in /etc/test/nested/copy/417373
	I1014 19:46:51.879295  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/417373
	I1014 19:46:51.887031  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:46:51.905744  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts --> /etc/test/nested/copy/417373/hosts (40 bytes)
	I1014 19:46:51.923826  443658 start.go:296] duration metric: took 170.03666ms for postStartSetup
	I1014 19:46:51.923911  443658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:46:51.923959  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:51.942362  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.043778  443658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 19:46:52.048837  443658 fix.go:56] duration metric: took 1.354467438s for fixHost
	I1014 19:46:52.048860  443658 start.go:83] releasing machines lock for "functional-744288", held for 1.354513179s
	I1014 19:46:52.048940  443658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-744288
	I1014 19:46:52.067069  443658 ssh_runner.go:195] Run: cat /version.json
	I1014 19:46:52.067102  443658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:46:52.067120  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:52.067171  443658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:46:52.086721  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.087447  443658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:46:52.242329  443658 ssh_runner.go:195] Run: systemctl --version
	I1014 19:46:52.249118  443658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:46:52.286245  443658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:46:52.291299  443658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:46:52.291349  443658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:46:52.300635  443658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 19:46:52.300652  443658 start.go:495] detecting cgroup driver to use...
	I1014 19:46:52.300686  443658 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 19:46:52.300736  443658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:46:52.316275  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:46:52.329801  443658 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:46:52.329853  443658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:46:52.346243  443658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:46:52.359490  443658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:46:52.447197  443658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:46:52.538861  443658 docker.go:234] disabling docker service ...
	I1014 19:46:52.538916  443658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:46:52.553930  443658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:46:52.567369  443658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:46:52.660956  443658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:46:52.750890  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 19:46:52.763838  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:46:52.778079  443658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:46:52.778155  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.787486  443658 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 19:46:52.787547  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.796683  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.805576  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.814550  443658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:46:52.822996  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.831895  443658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.840774  443658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:46:52.850651  443658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:46:52.859313  443658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:46:52.867538  443658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:46:52.962127  443658 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 19:46:53.076386  443658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:46:53.076443  443658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:46:53.080594  443658 start.go:563] Will wait 60s for crictl version
	I1014 19:46:53.080668  443658 ssh_runner.go:195] Run: which crictl
	I1014 19:46:53.084304  443658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 19:46:53.109208  443658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 19:46:53.109281  443658 ssh_runner.go:195] Run: crio --version
	I1014 19:46:53.138035  443658 ssh_runner.go:195] Run: crio --version
	I1014 19:46:53.168844  443658 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 19:46:53.170307  443658 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 19:46:53.187885  443658 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 19:46:53.194070  443658 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1014 19:46:53.195672  443658 kubeadm.go:883] updating cluster {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 19:46:53.195871  443658 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:46:53.195945  443658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:46:53.228563  443658 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:46:53.228574  443658 crio.go:433] Images already preloaded, skipping extraction
	I1014 19:46:53.228622  443658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:46:53.254361  443658 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:46:53.254375  443658 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:46:53.254381  443658 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1014 19:46:53.254470  443658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-744288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:46:53.254527  443658 ssh_runner.go:195] Run: crio config
	I1014 19:46:53.300404  443658 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1014 19:46:53.300426  443658 cni.go:84] Creating CNI manager for ""
	I1014 19:46:53.300433  443658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:46:53.300444  443658 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:46:53.300495  443658 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-744288 NodeName:functional-744288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:46:53.300616  443658 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-744288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
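	(For reference: the block above is a four-document YAML stream — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration. A minimal illustrative Go sketch, not minikube code, that splits such a stream and confirms each document carries an apiVersion/kind pair, assuming gopkg.in/yaml.v3 is available:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Split the multi-document stream on the standalone "---" separators.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				fmt.Fprintf(os.Stderr, "unparsable document: %v\n", err)
				os.Exit(1)
			}
			fmt.Printf("%v / %v\n", m["apiVersion"], m["kind"])
		}
	}
	)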
	I1014 19:46:53.300679  443658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:46:53.309514  443658 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:46:53.309583  443658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:46:53.317487  443658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1014 19:46:53.330167  443658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:46:53.343013  443658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1014 19:46:53.355344  443658 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 19:46:53.359037  443658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:46:53.444644  443658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:46:53.458036  443658 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288 for IP: 192.168.49.2
	I1014 19:46:53.458048  443658 certs.go:195] generating shared ca certs ...
	I1014 19:46:53.458069  443658 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:46:53.458227  443658 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 19:46:53.458260  443658 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 19:46:53.458267  443658 certs.go:257] generating profile certs ...
	I1014 19:46:53.458335  443658 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.key
	I1014 19:46:53.458371  443658 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key.d065d9e2
	I1014 19:46:53.458404  443658 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key
	I1014 19:46:53.458496  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 19:46:53.458520  443658 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 19:46:53.458525  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:46:53.458546  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 19:46:53.458563  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:46:53.458578  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 19:46:53.458610  443658 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 19:46:53.459307  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:46:53.477414  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 19:46:53.495270  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:46:53.512555  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:46:53.529773  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 19:46:53.546789  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:46:53.564254  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:46:53.581817  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 19:46:53.599895  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:46:53.617446  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 19:46:53.635253  443658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 19:46:53.652640  443658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:46:53.665679  443658 ssh_runner.go:195] Run: openssl version
	I1014 19:46:53.672008  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:46:53.680614  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.684470  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.684516  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:46:53.719901  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 19:46:53.728850  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 19:46:53.737556  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.741417  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.741461  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 19:46:53.776307  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 19:46:53.785236  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 19:46:53.794084  443658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.797892  443658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.797948  443658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 19:46:53.834593  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 19:46:53.844414  443658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:46:53.848749  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 19:46:53.887194  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 19:46:53.922606  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 19:46:53.957478  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 19:46:53.992284  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 19:46:54.027831  443658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
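	(For reference: the six openssl invocations above use -checkend 86400 to ask whether each control-plane certificate expires within 24 hours. A minimal Go equivalent of that check — the certificate path is taken from the log; everything else is an illustrative assumption:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		fmt.Println("expires within 24h:", soon)
	}
	)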
	I1014 19:46:54.062500  443658 kubeadm.go:400] StartCluster: {Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:46:54.062581  443658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:46:54.062679  443658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:46:54.091036  443658 cri.go:89] found id: ""
	I1014 19:46:54.091100  443658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:46:54.099853  443658 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 19:46:54.099866  443658 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 19:46:54.099936  443658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 19:46:54.108263  443658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.108959  443658 kubeconfig.go:125] found "functional-744288" server: "https://192.168.49.2:8441"
	I1014 19:46:54.110744  443658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 19:46:54.119142  443658 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-14 19:32:19.540090301 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-14 19:46:53.353553179 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
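	(For reference: the drift check above keys the decision to reconfigure off diff -u between the deployed kubeadm.yaml and the freshly rendered one. A minimal sketch of that pattern, in which diff's exit status 1 — files differ — must be distinguished from a genuine failure; the paths come from the log, the rest is assumed:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// configDrifted returns true if the two files differ, printing the unified diff.
	func configDrifted(oldPath, newPath string) (bool, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, nil // exit 0: files identical
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			fmt.Print(string(out)) // exit 1: show the drift, as the log does
			return true, nil
		}
		return false, err // any other status: diff itself failed
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("drift detected:", drifted)
	}
	)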
	I1014 19:46:54.119152  443658 kubeadm.go:1160] stopping kube-system containers ...
	I1014 19:46:54.119166  443658 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 19:46:54.119218  443658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:46:54.148301  443658 cri.go:89] found id: ""
	I1014 19:46:54.148360  443658 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 19:46:54.184714  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:46:54.193363  443658 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 14 19:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 14 19:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct 14 19:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 14 19:36 /etc/kubernetes/scheduler.conf
	
	I1014 19:46:54.193426  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:46:54.201562  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:46:54.209606  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.209663  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:46:54.217395  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:46:54.225064  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.225124  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:46:54.232906  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:46:54.240872  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:46:54.240946  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:46:54.249061  443658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:46:54.257286  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:54.300108  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.343385  443658 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.043246412s)
	I1014 19:46:55.343447  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.525076  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.576109  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 19:46:55.627520  443658 api_server.go:52] waiting for apiserver process to appear ...
	I1014 19:46:55.627605  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:56.127985  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:56.627838  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:57.127896  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:57.627665  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:58.127984  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:58.627867  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:59.127900  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:46:59.628123  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:00.128625  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:00.627821  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:01.128624  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:01.628023  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:02.127948  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:02.627921  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:03.127948  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:03.628734  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:04.128392  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:04.628537  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:05.128064  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:05.628802  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:06.128694  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:06.628003  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:07.128400  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:07.628401  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:08.127838  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:08.628730  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:09.128120  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:09.628353  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:10.128434  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:10.628596  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:11.128581  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:11.627793  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:12.127961  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:12.628351  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:13.128116  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:13.627994  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:14.128426  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:14.628582  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:15.127702  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:15.628620  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:16.128507  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:16.628503  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:17.128107  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:17.628228  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:18.128362  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:18.628356  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:19.127920  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:19.628163  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:20.128061  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:20.628781  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:21.127881  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:21.628577  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:22.128659  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:22.628134  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:23.128128  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:23.627880  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:24.128119  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:24.627778  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:25.127863  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:25.628390  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:26.127929  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:26.627912  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:27.128042  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:27.628342  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:28.128494  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:28.628349  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:29.128156  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:29.628040  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:30.127990  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:30.627843  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:31.128015  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:31.627940  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:32.127940  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:32.628112  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:33.127960  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:33.627881  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:34.128093  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:34.628548  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:35.128447  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:35.628084  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:36.128068  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:36.628232  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:37.127674  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:37.627888  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:38.127934  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:38.627918  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:39.127805  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:39.628511  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:40.127885  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:40.628201  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:41.128746  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:41.627723  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:42.127816  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:42.628553  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:43.128336  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:43.628428  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:44.128606  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:44.628579  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:45.128728  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:45.628365  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:46.127990  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:46.628044  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:47.127727  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:47.628173  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:48.128160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:48.627943  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:49.128276  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:49.628454  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:50.127829  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:50.628280  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:51.127982  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:51.628287  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:52.128593  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:52.627776  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:53.127784  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:53.628593  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:54.127690  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:54.627941  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:55.128160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
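	(For reference: the run of pgrep calls above is minikube polling for the kube-apiserver process roughly every 500ms. A standalone sketch of the same wait loop — the pgrep pattern is taken from the log; the timeout value is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process matches pattern or timeout elapses.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matches.
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	)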
	I1014 19:47:55.628161  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:47:55.628261  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:47:55.656679  443658 cri.go:89] found id: ""
	I1014 19:47:55.656706  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.656717  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:47:55.656725  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:47:55.656807  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:47:55.684574  443658 cri.go:89] found id: ""
	I1014 19:47:55.684594  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.684602  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:47:55.684607  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:47:55.684669  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:47:55.711291  443658 cri.go:89] found id: ""
	I1014 19:47:55.711309  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.711316  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:47:55.711321  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:47:55.711376  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:47:55.738652  443658 cri.go:89] found id: ""
	I1014 19:47:55.738669  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.738678  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:47:55.738690  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:47:55.738752  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:47:55.765191  443658 cri.go:89] found id: ""
	I1014 19:47:55.765208  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.765215  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:47:55.765220  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:47:55.765267  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:47:55.791406  443658 cri.go:89] found id: ""
	I1014 19:47:55.791425  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.791433  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:47:55.791438  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:47:55.791483  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:47:55.817705  443658 cri.go:89] found id: ""
	I1014 19:47:55.817724  443658 logs.go:282] 0 containers: []
	W1014 19:47:55.817732  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:47:55.817741  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:47:55.817787  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:47:55.885166  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:47:55.885191  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:47:55.903388  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:47:55.903408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:47:55.962011  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:47:55.955051    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.955898    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957465    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957907    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.958999    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:47:55.955051    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.955898    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957465    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.957907    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:55.958999    6730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:47:55.962024  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:47:55.962036  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:47:56.023614  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:47:56.023639  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
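	(For reference: the container-status step above shells out with a fallback — crictl if present, otherwise docker. A minimal Go sketch of that try-then-fall-back pattern; the commands mirror the log, the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus tries each command in order and returns the first success.
	func containerStatus() (string, error) {
		for _, argv := range [][]string{
			{"sudo", "crictl", "ps", "-a"},
			{"sudo", "docker", "ps", "-a"},
		} {
			out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
			if err == nil {
				return string(out), nil
			}
		}
		return "", fmt.Errorf("neither crictl nor docker produced container status")
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(out)
	}
	)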
	I1014 19:47:58.556015  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:47:58.567258  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:47:58.567330  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:47:58.593588  443658 cri.go:89] found id: ""
	I1014 19:47:58.593606  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.593613  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:47:58.593618  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:47:58.593686  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:47:58.621667  443658 cri.go:89] found id: ""
	I1014 19:47:58.621687  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.621694  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:47:58.621699  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:47:58.621753  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:47:58.648823  443658 cri.go:89] found id: ""
	I1014 19:47:58.648841  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.648851  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:47:58.648858  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:47:58.648920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:47:58.675986  443658 cri.go:89] found id: ""
	I1014 19:47:58.676007  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.676017  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:47:58.676024  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:47:58.676074  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:47:58.703476  443658 cri.go:89] found id: ""
	I1014 19:47:58.703492  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.703499  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:47:58.703504  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:47:58.703553  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:47:58.732093  443658 cri.go:89] found id: ""
	I1014 19:47:58.732116  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.732127  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:47:58.732133  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:47:58.732188  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:47:58.759813  443658 cri.go:89] found id: ""
	I1014 19:47:58.759832  443658 logs.go:282] 0 containers: []
	W1014 19:47:58.759839  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:47:58.759848  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:47:58.759858  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:47:58.829913  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:47:58.829936  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:47:58.848245  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:47:58.848269  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:47:58.907295  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:47:58.900510    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.901027    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.902546    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.903012    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.904214    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:47:58.900510    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.901027    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.902546    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.903012    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:47:58.904214    6852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:47:58.907316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:47:58.907329  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:47:58.971553  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:47:58.971576  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:01.502989  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:01.514422  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:01.514481  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:01.541083  443658 cri.go:89] found id: ""
	I1014 19:48:01.541099  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.541107  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:01.541113  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:01.541166  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:01.568411  443658 cri.go:89] found id: ""
	I1014 19:48:01.568430  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.568438  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:01.568443  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:01.568507  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:01.596626  443658 cri.go:89] found id: ""
	I1014 19:48:01.596643  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.596651  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:01.596656  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:01.596709  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:01.625098  443658 cri.go:89] found id: ""
	I1014 19:48:01.625114  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.625121  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:01.625126  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:01.625175  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:01.652267  443658 cri.go:89] found id: ""
	I1014 19:48:01.652287  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.652296  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:01.652302  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:01.652369  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:01.680110  443658 cri.go:89] found id: ""
	I1014 19:48:01.680126  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.680132  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:01.680137  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:01.680183  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:01.706650  443658 cri.go:89] found id: ""
	I1014 19:48:01.706673  443658 logs.go:282] 0 containers: []
	W1014 19:48:01.706682  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:01.706692  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:01.706703  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:01.777579  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:01.777603  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:01.796141  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:01.796160  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:01.854657  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:01.848022    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.848515    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850053    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850582    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.851657    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:01.848022    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.848515    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850053    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.850582    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:01.851657    6974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:01.854673  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:01.854688  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:01.921567  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:01.921605  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:04.454355  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:04.465748  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:04.465834  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:04.493735  443658 cri.go:89] found id: ""
	I1014 19:48:04.493752  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.493773  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:04.493780  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:04.493837  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:04.520295  443658 cri.go:89] found id: ""
	I1014 19:48:04.520313  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.520321  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:04.520325  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:04.520380  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:04.547856  443658 cri.go:89] found id: ""
	I1014 19:48:04.547880  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.547891  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:04.547898  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:04.547963  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:04.574029  443658 cri.go:89] found id: ""
	I1014 19:48:04.574047  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.574055  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:04.574059  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:04.574111  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:04.600612  443658 cri.go:89] found id: ""
	I1014 19:48:04.600635  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.600643  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:04.600648  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:04.600710  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:04.627768  443658 cri.go:89] found id: ""
	I1014 19:48:04.627787  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.627796  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:04.627803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:04.627868  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:04.654609  443658 cri.go:89] found id: ""
	I1014 19:48:04.654626  443658 logs.go:282] 0 containers: []
	W1014 19:48:04.654633  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:04.654641  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:04.654666  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:04.723997  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:04.724022  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:04.742117  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:04.742138  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:04.800762  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:04.793052    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.793685    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795214    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795736    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.797328    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:04.793052    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.793685    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795214    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.795736    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:04.797328    7104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:04.800782  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:04.800797  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:04.865079  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:04.865104  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
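	The cycle above repeats below at roughly three-second intervals: the runner first checks for a kube-apiserver process, and only falls back to gathering logs when the check fails. As a rough illustration of that poll-until-ready shape (a minimal sketch, not minikube's actual implementation; the interval and the pgrep pattern are taken from the log, everything else is assumed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls for a kube-apiserver process until it appears
    // or the deadline passes. Hypothetical sketch; the pgrep pattern and
    // the ~3s interval mirror the log above.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }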
	I1014 19:48:07.397466  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:07.409124  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:07.409189  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:07.436009  443658 cri.go:89] found id: ""
	I1014 19:48:07.436030  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.436039  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:07.436045  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:07.436092  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:07.463450  443658 cri.go:89] found id: ""
	I1014 19:48:07.463467  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.463474  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:07.463479  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:07.463538  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:07.489350  443658 cri.go:89] found id: ""
	I1014 19:48:07.489367  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.489373  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:07.489379  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:07.489423  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:07.516187  443658 cri.go:89] found id: ""
	I1014 19:48:07.516205  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.516212  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:07.516217  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:07.516266  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:07.544147  443658 cri.go:89] found id: ""
	I1014 19:48:07.544163  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.544171  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:07.544178  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:07.544232  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:07.570956  443658 cri.go:89] found id: ""
	I1014 19:48:07.570987  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.570997  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:07.571004  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:07.571055  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:07.599057  443658 cri.go:89] found id: ""
	I1014 19:48:07.599075  443658 logs.go:282] 0 containers: []
	W1014 19:48:07.599083  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:07.599091  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:07.599102  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:07.629352  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:07.629386  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:07.696795  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:07.696819  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:07.714841  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:07.714863  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:07.773003  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:07.765637    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.766223    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.767815    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.768258    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.769624    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:07.765637    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.766223    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.767815    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.768258    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:07.769624    7253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:07.773022  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:07.773036  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
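	Each fallback pass queries the CRI once per control-plane component with crictl ps -a --quiet --name=<component>; --quiet prints only container IDs, so empty output is what produces the "0 containers" lines and the "No container was found matching" warnings above. A hedged Go sketch of that query (the component list and flags come from the log; the helper name is made up):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs of CRI containers, in any state,
    // whose name matches the filter. Empty output means none exist.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("query for %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    		}
    	}
    }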
	I1014 19:48:10.338910  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:10.350323  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:10.350379  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:10.377858  443658 cri.go:89] found id: ""
	I1014 19:48:10.377875  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.377882  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:10.377886  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:10.377938  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:10.404249  443658 cri.go:89] found id: ""
	I1014 19:48:10.404265  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.404272  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:10.404277  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:10.404326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:10.432298  443658 cri.go:89] found id: ""
	I1014 19:48:10.432315  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.432322  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:10.432328  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:10.432377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:10.458476  443658 cri.go:89] found id: ""
	I1014 19:48:10.458495  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.458501  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:10.458507  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:10.458552  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:10.486998  443658 cri.go:89] found id: ""
	I1014 19:48:10.487017  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.487024  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:10.487029  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:10.487075  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:10.514207  443658 cri.go:89] found id: ""
	I1014 19:48:10.514223  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.514230  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:10.514235  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:10.514285  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:10.541589  443658 cri.go:89] found id: ""
	I1014 19:48:10.541604  443658 logs.go:282] 0 containers: []
	W1014 19:48:10.541610  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:10.541618  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:10.541630  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:10.608114  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:10.608140  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:10.627515  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:10.627537  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:10.687776  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:10.680118    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.680631    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682237    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682859    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.684410    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:10.680118    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.680631    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682237    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.682859    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:10.684410    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:10.687790  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:10.687805  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:10.752090  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:10.752115  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:13.282895  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:13.294310  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:13.294364  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:13.321971  443658 cri.go:89] found id: ""
	I1014 19:48:13.321990  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.321999  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:13.322005  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:13.322054  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:13.349696  443658 cri.go:89] found id: ""
	I1014 19:48:13.349717  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.349727  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:13.349734  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:13.349809  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:13.375640  443658 cri.go:89] found id: ""
	I1014 19:48:13.375658  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.375664  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:13.375669  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:13.375723  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:13.401774  443658 cri.go:89] found id: ""
	I1014 19:48:13.401795  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.401805  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:13.401810  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:13.401857  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:13.428959  443658 cri.go:89] found id: ""
	I1014 19:48:13.428976  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.428983  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:13.428988  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:13.429047  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:13.457247  443658 cri.go:89] found id: ""
	I1014 19:48:13.457264  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.457271  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:13.457276  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:13.457324  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:13.483816  443658 cri.go:89] found id: ""
	I1014 19:48:13.483834  443658 logs.go:282] 0 containers: []
	W1014 19:48:13.483841  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:13.483849  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:13.483860  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:13.551788  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:13.551811  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:13.569457  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:13.569478  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:13.627267  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:13.619783    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.620394    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.621969    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.622387    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.623926    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:13.619783    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.620394    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.621969    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.622387    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:13.623926    7474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:13.627279  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:13.627289  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:13.691177  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:13.691201  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
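	Every "describe nodes" attempt fails the same way: kubectl cannot dial [::1]:8441, which is consistent with the earlier findings that no kube-apiserver process or container exists. A quick standalone probe of that port would confirm the same condition (a minimal sketch; the host and port are taken from the error messages above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the apiserver port that kubectl is failing to reach.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// Expected while the apiserver is down: "connection refused".
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8441")
    }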
	I1014 19:48:16.221827  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:16.233209  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:16.233277  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:16.259929  443658 cri.go:89] found id: ""
	I1014 19:48:16.259948  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.259959  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:16.259966  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:16.260018  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:16.287292  443658 cri.go:89] found id: ""
	I1014 19:48:16.287310  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.287318  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:16.287326  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:16.287381  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:16.314495  443658 cri.go:89] found id: ""
	I1014 19:48:16.314516  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.314525  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:16.314531  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:16.314602  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:16.340741  443658 cri.go:89] found id: ""
	I1014 19:48:16.340772  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.340785  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:16.340791  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:16.340839  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:16.368210  443658 cri.go:89] found id: ""
	I1014 19:48:16.368225  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.368233  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:16.368239  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:16.368289  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:16.394831  443658 cri.go:89] found id: ""
	I1014 19:48:16.394848  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.394858  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:16.394865  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:16.394922  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:16.421594  443658 cri.go:89] found id: ""
	I1014 19:48:16.421614  443658 logs.go:282] 0 containers: []
	W1014 19:48:16.421622  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:16.421631  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:16.421641  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:16.491514  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:16.491538  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:16.509528  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:16.509549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:16.567026  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:16.559396    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.560067    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.561808    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.562264    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.563791    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:16.559396    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.560067    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.561808    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.562264    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:16.563791    7591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:16.567039  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:16.567050  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:16.633705  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:16.633729  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:19.170176  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:19.181543  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:19.181597  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:19.207369  443658 cri.go:89] found id: ""
	I1014 19:48:19.207386  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.207392  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:19.207397  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:19.207441  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:19.233860  443658 cri.go:89] found id: ""
	I1014 19:48:19.233881  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.233890  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:19.233896  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:19.233956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:19.260261  443658 cri.go:89] found id: ""
	I1014 19:48:19.260279  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.260287  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:19.260293  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:19.260346  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:19.287494  443658 cri.go:89] found id: ""
	I1014 19:48:19.287515  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.287525  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:19.287532  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:19.287584  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:19.313774  443658 cri.go:89] found id: ""
	I1014 19:48:19.313792  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.313798  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:19.313803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:19.313860  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:19.340266  443658 cri.go:89] found id: ""
	I1014 19:48:19.340286  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.340296  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:19.340305  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:19.340371  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:19.367478  443658 cri.go:89] found id: ""
	I1014 19:48:19.367494  443658 logs.go:282] 0 containers: []
	W1014 19:48:19.367501  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:19.367510  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:19.367519  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:19.434384  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:19.434408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:19.453201  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:19.453221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:19.511748  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:19.504301    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.504947    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506543    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506980    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.508451    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:19.504301    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.504947    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506543    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.506980    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:19.508451    7733 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:19.511771  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:19.511786  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:19.572669  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:19.572694  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
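	The journal gathering above pulls the last 400 lines per systemd unit (journalctl -u <unit> -n 400) for kubelet and crio. A hedged sketch of fetching those unit logs from Go (the unit names and line count come from the log; the helper is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitLogs fetches the last n lines of a systemd unit's journal,
    // mirroring the journalctl invocations in the log above.
    func unitLogs(unit string, n int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, u := range []string{"kubelet", "crio"} {
    		logs, err := unitLogs(u, 400)
    		if err != nil {
    			fmt.Printf("could not read %s journal: %v\n", u, err)
    			continue
    		}
    		fmt.Printf("== %s journal: %d bytes ==\n", u, len(logs))
    	}
    }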
	I1014 19:48:22.104359  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:22.116056  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:22.116114  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:22.143506  443658 cri.go:89] found id: ""
	I1014 19:48:22.143526  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.143535  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:22.143542  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:22.143604  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:22.171275  443658 cri.go:89] found id: ""
	I1014 19:48:22.171293  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.171300  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:22.171304  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:22.171354  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:22.200946  443658 cri.go:89] found id: ""
	I1014 19:48:22.200963  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.200969  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:22.200975  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:22.201021  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:22.229821  443658 cri.go:89] found id: ""
	I1014 19:48:22.229838  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.229848  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:22.229853  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:22.229908  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:22.257470  443658 cri.go:89] found id: ""
	I1014 19:48:22.257490  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.257501  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:22.257507  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:22.257561  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:22.286561  443658 cri.go:89] found id: ""
	I1014 19:48:22.286582  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.286590  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:22.286640  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:22.286708  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:22.314642  443658 cri.go:89] found id: ""
	I1014 19:48:22.314659  443658 logs.go:282] 0 containers: []
	W1014 19:48:22.314665  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:22.314673  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:22.314703  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:22.375334  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:22.367894    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.368440    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370076    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370561    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.372196    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:22.367894    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.368440    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370076    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.370561    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:22.372196    7851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:22.375355  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:22.375369  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:22.437367  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:22.437393  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:22.467945  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:22.467963  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:22.538691  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:22.538715  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:25.057422  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:25.069417  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:25.069480  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:25.097308  443658 cri.go:89] found id: ""
	I1014 19:48:25.097327  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.097334  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:25.097340  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:25.097399  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:25.124869  443658 cri.go:89] found id: ""
	I1014 19:48:25.124888  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.124897  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:25.124902  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:25.124956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:25.151745  443658 cri.go:89] found id: ""
	I1014 19:48:25.151777  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.151788  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:25.151794  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:25.151851  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:25.178827  443658 cri.go:89] found id: ""
	I1014 19:48:25.178847  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.178857  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:25.178864  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:25.178919  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:25.207030  443658 cri.go:89] found id: ""
	I1014 19:48:25.207048  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.207055  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:25.207060  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:25.207115  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:25.234277  443658 cri.go:89] found id: ""
	I1014 19:48:25.234295  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.234302  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:25.234307  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:25.234351  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:25.260062  443658 cri.go:89] found id: ""
	I1014 19:48:25.260079  443658 logs.go:282] 0 containers: []
	W1014 19:48:25.260085  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:25.260094  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:25.260105  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:25.328418  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:25.328443  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:25.346610  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:25.346630  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:25.405353  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:25.397912    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.398394    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400014    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400430    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.401975    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:25.397912    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.398394    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400014    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.400430    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:25.401975    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:25.405366  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:25.405378  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:25.466377  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:25.466403  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
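	The report quotes the failed describe-nodes command's stderr twice because stdout and stderr are captured separately and both echoed. Reproducing that capture pattern for the same command (a sketch under the assumption that the paths shown in the log are in place; with the apiserver down it exits 1, and stderr carries the "connection refused" lines):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same command the log runs, with stdout and stderr kept apart.
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout = &stdout
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
    		return
    	}
    	fmt.Print(stdout.String())
    }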
	I1014 19:48:27.999561  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:28.010893  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:28.010948  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:28.037673  443658 cri.go:89] found id: ""
	I1014 19:48:28.037692  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.037699  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:28.037720  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:28.037786  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:28.065810  443658 cri.go:89] found id: ""
	I1014 19:48:28.065828  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.065835  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:28.065840  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:28.065891  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:28.093517  443658 cri.go:89] found id: ""
	I1014 19:48:28.093535  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.093542  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:28.093547  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:28.093594  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:28.120885  443658 cri.go:89] found id: ""
	I1014 19:48:28.120907  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.120917  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:28.120924  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:28.120991  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:28.151601  443658 cri.go:89] found id: ""
	I1014 19:48:28.151621  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.151632  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:28.151677  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:28.151731  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:28.179686  443658 cri.go:89] found id: ""
	I1014 19:48:28.179707  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.179718  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:28.179725  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:28.179796  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:28.207048  443658 cri.go:89] found id: ""
	I1014 19:48:28.207065  443658 logs.go:282] 0 containers: []
	W1014 19:48:28.207073  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:28.207081  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:28.207092  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:28.273826  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:28.273858  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:28.291974  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:28.291996  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:28.350599  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:28.343032    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344089    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344502    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346102    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346541    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:28.343032    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344089    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.344502    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346102    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:28.346541    8094 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:28.350610  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:28.350620  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:28.412963  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:28.412999  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:30.943653  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:30.954861  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:30.954918  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:30.982663  443658 cri.go:89] found id: ""
	I1014 19:48:30.982687  443658 logs.go:282] 0 containers: []
	W1014 19:48:30.982697  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:30.982705  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:30.982790  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:31.010956  443658 cri.go:89] found id: ""
	I1014 19:48:31.010972  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.010982  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:31.010988  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:31.011044  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:31.037820  443658 cri.go:89] found id: ""
	I1014 19:48:31.037835  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.037845  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:31.037851  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:31.037908  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:31.064198  443658 cri.go:89] found id: ""
	I1014 19:48:31.064219  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.064229  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:31.064237  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:31.064290  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:31.090978  443658 cri.go:89] found id: ""
	I1014 19:48:31.091014  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.091025  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:31.091031  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:31.091085  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:31.119501  443658 cri.go:89] found id: ""
	I1014 19:48:31.119519  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.119526  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:31.119531  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:31.119578  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:31.147180  443658 cri.go:89] found id: ""
	I1014 19:48:31.147202  443658 logs.go:282] 0 containers: []
	W1014 19:48:31.147212  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:31.147223  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:31.147235  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:31.215950  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:31.215975  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:31.234800  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:31.234824  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:31.293858  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:31.286222    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.286789    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288416    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288945    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.290474    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:31.286222    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.286789    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288416    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.288945    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:31.290474    8225 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:31.293875  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:31.293886  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:31.357651  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:31.357679  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:33.890973  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:33.903698  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:33.903750  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:33.930766  443658 cri.go:89] found id: ""
	I1014 19:48:33.930786  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.930793  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:33.930798  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:33.930850  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:33.958613  443658 cri.go:89] found id: ""
	I1014 19:48:33.958634  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.958644  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:33.958652  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:33.958714  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:33.985879  443658 cri.go:89] found id: ""
	I1014 19:48:33.985900  443658 logs.go:282] 0 containers: []
	W1014 19:48:33.985908  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:33.985913  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:33.985969  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:34.014311  443658 cri.go:89] found id: ""
	I1014 19:48:34.014330  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.014338  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:34.014344  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:34.014406  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:34.042331  443658 cri.go:89] found id: ""
	I1014 19:48:34.042352  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.042361  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:34.042369  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:34.042432  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:34.070428  443658 cri.go:89] found id: ""
	I1014 19:48:34.070446  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.070456  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:34.070463  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:34.070517  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:34.097884  443658 cri.go:89] found id: ""
	I1014 19:48:34.097903  443658 logs.go:282] 0 containers: []
	W1014 19:48:34.097921  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:34.097931  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:34.097948  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:34.157332  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:34.149617    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.150366    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152026    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152566    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.153919    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:34.149617    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.150366    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152026    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.152566    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:34.153919    8349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:34.157346  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:34.157361  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:34.220371  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:34.220398  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:34.250307  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:34.250325  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:34.315972  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:34.315994  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:36.835436  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:36.846681  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:36.846733  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:36.873365  443658 cri.go:89] found id: ""
	I1014 19:48:36.873381  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.873389  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:36.873394  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:36.873447  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:36.900441  443658 cri.go:89] found id: ""
	I1014 19:48:36.900458  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.900464  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:36.900469  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:36.900528  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:36.928334  443658 cri.go:89] found id: ""
	I1014 19:48:36.928352  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.928359  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:36.928364  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:36.928432  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:36.955215  443658 cri.go:89] found id: ""
	I1014 19:48:36.955234  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.955244  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:36.955249  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:36.955304  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:36.982183  443658 cri.go:89] found id: ""
	I1014 19:48:36.982201  443658 logs.go:282] 0 containers: []
	W1014 19:48:36.982208  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:36.982213  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:36.982270  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:37.009766  443658 cri.go:89] found id: ""
	I1014 19:48:37.009788  443658 logs.go:282] 0 containers: []
	W1014 19:48:37.009798  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:37.009803  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:37.009852  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:37.036432  443658 cri.go:89] found id: ""
	I1014 19:48:37.036454  443658 logs.go:282] 0 containers: []
	W1014 19:48:37.036464  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:37.036474  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:37.036484  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:37.101021  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:37.101045  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:37.132706  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:37.132724  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:37.200337  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:37.200365  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:37.218525  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:37.218545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:37.279294  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:37.271380    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.272016    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.273706    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.274226    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.275831    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:37.271380    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.272016    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.273706    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.274226    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:37.275831    8491 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:39.779639  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:39.791242  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:39.791305  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:39.817960  443658 cri.go:89] found id: ""
	I1014 19:48:39.817977  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.817984  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:39.817989  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:39.818038  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:39.845643  443658 cri.go:89] found id: ""
	I1014 19:48:39.845661  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.845668  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:39.845673  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:39.845724  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:39.872711  443658 cri.go:89] found id: ""
	I1014 19:48:39.872727  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.872734  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:39.872738  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:39.872815  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:39.900683  443658 cri.go:89] found id: ""
	I1014 19:48:39.900705  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.900714  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:39.900719  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:39.900807  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:39.929509  443658 cri.go:89] found id: ""
	I1014 19:48:39.929529  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.929540  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:39.929546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:39.929599  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:39.955582  443658 cri.go:89] found id: ""
	I1014 19:48:39.955598  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.955605  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:39.955610  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:39.955657  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:39.983710  443658 cri.go:89] found id: ""
	I1014 19:48:39.983727  443658 logs.go:282] 0 containers: []
	W1014 19:48:39.983736  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:39.983744  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:39.983782  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:40.052784  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:40.052811  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:40.070963  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:40.070983  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:40.129639  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:40.122787    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.123371    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.124932    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.125359    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.126495    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:40.122787    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.123371    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.124932    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.125359    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:40.126495    8591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:40.129685  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:40.129697  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:40.191333  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:40.191359  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
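	[editor's note] The container-status command repeated throughout -- sudo `which crictl || echo crictl` ps -a || sudo docker ps -a -- packs a double fallback into one line. Expanded into plain shell, and assuming only standard crictl/docker flags, it is roughly:
	
	  # Rough expansion of the one-liner above: try crictl by its resolved path
	  # (or by bare name if root's PATH doesn't have it), and if that command
	  # fails for any reason, fall back to docker for the same listing.
	  if sudo "$(which crictl || echo crictl)" ps -a; then
	    : # CRI listing succeeded
	  else
	    sudo docker ps -a
	  fi
	
	Note the fallback triggers on any crictl failure, not only when the binary is missing, which keeps this diagnostic usable on both CRI-O and dockerd nodes.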
	I1014 19:48:42.723817  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:42.735282  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:42.735333  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:42.762376  443658 cri.go:89] found id: ""
	I1014 19:48:42.762395  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.762402  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:42.762407  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:42.762455  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:42.789118  443658 cri.go:89] found id: ""
	I1014 19:48:42.789136  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.789142  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:42.789147  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:42.789194  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:42.816692  443658 cri.go:89] found id: ""
	I1014 19:48:42.816709  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.816717  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:42.816721  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:42.816787  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:42.844094  443658 cri.go:89] found id: ""
	I1014 19:48:42.844111  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.844117  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:42.844122  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:42.844169  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:42.871946  443658 cri.go:89] found id: ""
	I1014 19:48:42.871964  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.871971  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:42.871975  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:42.872038  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:42.899614  443658 cri.go:89] found id: ""
	I1014 19:48:42.899632  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.899638  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:42.899643  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:42.899689  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:42.927253  443658 cri.go:89] found id: ""
	I1014 19:48:42.927269  443658 logs.go:282] 0 containers: []
	W1014 19:48:42.927277  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:42.927285  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:42.927301  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:42.994077  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:42.994105  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:43.012747  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:43.012777  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:43.071125  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:43.063880    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.064444    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066049    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066536    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.068056    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:43.063880    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.064444    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066049    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.066536    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:43.068056    8721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:43.071145  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:43.071157  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:43.136102  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:43.136125  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:45.668732  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:45.679980  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:45.680041  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:45.708000  443658 cri.go:89] found id: ""
	I1014 19:48:45.708030  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.708040  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:45.708046  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:45.708093  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:45.736452  443658 cri.go:89] found id: ""
	I1014 19:48:45.736530  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.736542  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:45.736548  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:45.736603  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:45.764163  443658 cri.go:89] found id: ""
	I1014 19:48:45.764184  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.764194  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:45.764201  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:45.764259  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:45.791827  443658 cri.go:89] found id: ""
	I1014 19:48:45.791842  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.791848  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:45.791854  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:45.791912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:45.819509  443658 cri.go:89] found id: ""
	I1014 19:48:45.819529  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.819540  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:45.819547  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:45.819609  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:45.847227  443658 cri.go:89] found id: ""
	I1014 19:48:45.847248  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.847259  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:45.847266  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:45.847329  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:45.873974  443658 cri.go:89] found id: ""
	I1014 19:48:45.873995  443658 logs.go:282] 0 containers: []
	W1014 19:48:45.874004  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:45.874015  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:45.874030  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:45.932513  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:45.925000    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.925641    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927410    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927848    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.929196    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:45.925000    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.925641    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927410    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.927848    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:45.929196    8844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:45.932528  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:45.932545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:45.993477  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:45.993504  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:46.025620  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:46.025638  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:46.097209  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:46.097236  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
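	[editor's note] The cycle repeating above (pgrep for a kube-apiserver process, a per-component crictl sweep, then kubelet/dmesg/describe-nodes/CRI-O log gathering) fires roughly every three seconds: it is minikube waiting for the apiserver to come back before declaring the node healthy. A minimal sketch of that wait as a plain shell loop, using the :8441 port and pgrep pattern from this log; the deadline and the loop shape are assumptions, not minikube's actual Go implementation:
	
	  # Hypothetical reproduction of the wait loop: poll until an apiserver
	  # process exists and the port answers, or a deadline passes.
	  deadline=$((SECONDS + 240))
	  until sudo pgrep -xf 'kube-apiserver.*minikube.*' >/dev/null \
	        && curl -ksf https://localhost:8441/healthz >/dev/null; do
	    (( SECONDS >= deadline )) && { echo 'apiserver never came up on :8441' >&2; exit 1; }
	    sleep 3
	  done
	  echo 'apiserver is healthy'
	
	Here the deadline is eventually what fires: no iteration below ever finds a kube-apiserver container, so the test times out instead of converging.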
	I1014 19:48:48.617067  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:48.628616  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:48.628683  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:48.655361  443658 cri.go:89] found id: ""
	I1014 19:48:48.655377  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.655388  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:48.655395  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:48.655458  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:48.681992  443658 cri.go:89] found id: ""
	I1014 19:48:48.682008  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.682015  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:48.682020  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:48.682065  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:48.708630  443658 cri.go:89] found id: ""
	I1014 19:48:48.708647  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.708654  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:48.708658  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:48.708726  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:48.735832  443658 cri.go:89] found id: ""
	I1014 19:48:48.735848  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.735859  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:48.735863  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:48.735921  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:48.763984  443658 cri.go:89] found id: ""
	I1014 19:48:48.763999  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.764017  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:48.764022  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:48.764074  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:48.790052  443658 cri.go:89] found id: ""
	I1014 19:48:48.790072  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.790081  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:48.790088  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:48.790137  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:48.816830  443658 cri.go:89] found id: ""
	I1014 19:48:48.816847  443658 logs.go:282] 0 containers: []
	W1014 19:48:48.816854  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:48.816863  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:48.816874  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:48.885983  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:48.886007  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:48.904564  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:48.904584  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:48.963221  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:48.955419    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.956384    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.957942    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.958423    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.960005    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:48.955419    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.956384    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.957942    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.958423    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:48.960005    8972 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:48.963232  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:48.963245  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:49.024076  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:49.024100  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:51.555915  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:51.567493  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:51.567566  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:51.593927  443658 cri.go:89] found id: ""
	I1014 19:48:51.593943  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.593950  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:51.593955  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:51.594000  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:51.622234  443658 cri.go:89] found id: ""
	I1014 19:48:51.622250  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.622257  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:51.622261  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:51.622306  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:51.648637  443658 cri.go:89] found id: ""
	I1014 19:48:51.648654  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.648660  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:51.648666  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:51.648730  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:51.675538  443658 cri.go:89] found id: ""
	I1014 19:48:51.675559  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.675570  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:51.675577  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:51.675631  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:51.701640  443658 cri.go:89] found id: ""
	I1014 19:48:51.701657  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.701664  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:51.701670  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:51.701730  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:51.729739  443658 cri.go:89] found id: ""
	I1014 19:48:51.729770  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.729782  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:51.729789  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:51.729839  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:51.757162  443658 cri.go:89] found id: ""
	I1014 19:48:51.757184  443658 logs.go:282] 0 containers: []
	W1014 19:48:51.757195  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:51.757206  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:51.757225  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:51.825383  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:51.825408  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:51.843441  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:51.843462  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:51.901599  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:51.893806    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.894477    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896214    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896786    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.898462    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:48:51.893806    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.894477    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896214    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.896786    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:51.898462    9093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:48:51.901609  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:51.901621  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:51.963670  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:51.963696  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:54.494451  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:54.505690  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:54.505748  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:54.532934  443658 cri.go:89] found id: ""
	I1014 19:48:54.532956  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.532966  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:54.532973  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:54.533035  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:54.560665  443658 cri.go:89] found id: ""
	I1014 19:48:54.560682  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.560689  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:54.560693  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:54.560746  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:54.587851  443658 cri.go:89] found id: ""
	I1014 19:48:54.587871  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.587882  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:54.587889  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:54.587939  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:54.615307  443658 cri.go:89] found id: ""
	I1014 19:48:54.615324  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.615331  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:54.615336  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:54.615381  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:54.642900  443658 cri.go:89] found id: ""
	I1014 19:48:54.642916  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.642922  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:54.642928  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:54.642987  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:54.670686  443658 cri.go:89] found id: ""
	I1014 19:48:54.670702  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.670710  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:54.670715  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:54.670784  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:54.697226  443658 cri.go:89] found id: ""
	I1014 19:48:54.697246  443658 logs.go:282] 0 containers: []
	W1014 19:48:54.697255  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:54.697266  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:54.697280  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:54.759777  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:54.759804  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:54.790599  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:54.790617  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:54.864057  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:54.864090  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:48:54.882103  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:54.882128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:54.942079  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:54.934581    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.935124    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.936659    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.937300    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:54.938843    9243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
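Each iteration of the wait loop above follows the same shape: poll pgrep for a kube-apiserver process, query crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), find none, gather the kubelet, CRI-O, dmesg, container-status, and describe-nodes diagnostics, then retry on a roughly three-second cadence until a deadline. The following Go program is a minimal sketch of that poll-and-diagnose pattern, reusing the commands visible in the log; all names and the timeout are hypothetical, and this is an illustration of the pattern only, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // The control-plane containers the log above checks for on every iteration.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // anyContainerPresent mirrors the per-component crictl queries; in the log
    // above every one of them returns an empty ID list.
    func anyContainerPresent() bool {
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err == nil && strings.TrimSpace(string(out)) != "" {
                return true
            }
        }
        return false
    }

    // gatherDiagnostics runs the same log sources the harness dumps on a miss;
    // a real harness would capture and print their output.
    func gatherDiagnostics() {
        for _, c := range [][]string{
            {"journalctl", "-u", "kubelet", "-n", "400"},
            {"journalctl", "-u", "crio", "-n", "400"},
            {"crictl", "ps", "-a"},
        } {
            _ = exec.Command("sudo", c...).Run()
        }
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // hypothetical timeout
        for time.Now().Before(deadline) {
            if anyContainerPresent() {
                fmt.Println("kube-apiserver container found")
                return
            }
            gatherDiagnostics()
            time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }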
	I1014 19:48:57.443958  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:48:57.455537  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:48:57.455596  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:48:57.482660  443658 cri.go:89] found id: ""
	I1014 19:48:57.482684  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.482694  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:48:57.482704  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:48:57.482783  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:48:57.510445  443658 cri.go:89] found id: ""
	I1014 19:48:57.510461  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.510467  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:48:57.510471  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:48:57.510523  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:48:57.537439  443658 cri.go:89] found id: ""
	I1014 19:48:57.537456  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.537464  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:48:57.537469  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:48:57.537515  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:48:57.564369  443658 cri.go:89] found id: ""
	I1014 19:48:57.564386  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.564394  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:48:57.564401  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:48:57.564455  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:48:57.591584  443658 cri.go:89] found id: ""
	I1014 19:48:57.591601  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.591607  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:48:57.591612  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:48:57.591657  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:48:57.620996  443658 cri.go:89] found id: ""
	I1014 19:48:57.621016  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.621026  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:48:57.621033  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:48:57.621096  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:48:57.650978  443658 cri.go:89] found id: ""
	I1014 19:48:57.650994  443658 logs.go:282] 0 containers: []
	W1014 19:48:57.651001  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:48:57.651010  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:48:57.651022  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:48:57.709879  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:48:57.701644    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.702204    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.704523    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.705023    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:48:57.706491    9354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:48:57.709895  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:48:57.709906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:48:57.773086  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:48:57.773110  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:48:57.804357  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:48:57.804375  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:48:57.876116  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:48:57.876141  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:00.397550  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:00.408833  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:00.408898  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:00.436551  443658 cri.go:89] found id: ""
	I1014 19:49:00.436572  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.436580  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:00.436586  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:00.436643  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:00.463380  443658 cri.go:89] found id: ""
	I1014 19:49:00.463398  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.463406  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:00.463411  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:00.463464  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:00.489936  443658 cri.go:89] found id: ""
	I1014 19:49:00.489953  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.489961  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:00.489967  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:00.490025  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:00.517733  443658 cri.go:89] found id: ""
	I1014 19:49:00.517777  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.517789  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:00.517799  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:00.517853  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:00.545738  443658 cri.go:89] found id: ""
	I1014 19:49:00.545770  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.545782  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:00.545789  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:00.545847  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:00.572980  443658 cri.go:89] found id: ""
	I1014 19:49:00.572998  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.573007  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:00.573013  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:00.573073  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:00.601579  443658 cri.go:89] found id: ""
	I1014 19:49:00.601596  443658 logs.go:282] 0 containers: []
	W1014 19:49:00.601608  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:00.601620  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:00.601634  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:00.664237  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:00.664264  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:00.696881  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:00.696906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:00.769175  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:00.769201  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:00.787483  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:00.787504  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:00.845998  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:00.838686    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.839226    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.840825    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.841284    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:00.842865    9495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
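Every describe-nodes attempt fails identically with "dial tcp [::1]:8441: connect: connection refused", which means nothing is listening on the apiserver port at all: a refused connection fails immediately, whereas a hung or firewalled listener would time out instead. A small illustrative probe that distinguishes the two cases, assuming port 8441 as in this log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // An immediate "connection refused" (as in the kubectl errors above)
        // means no process is bound to the port; a timeout would instead
        // suggest a listener that accepts nothing or packets being dropped.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("a listener is present on localhost:8441")
    }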
	I1014 19:49:03.347716  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:03.359494  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:03.359550  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:03.387814  443658 cri.go:89] found id: ""
	I1014 19:49:03.387833  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.387842  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:03.387848  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:03.387913  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:03.416379  443658 cri.go:89] found id: ""
	I1014 19:49:03.416400  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.416410  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:03.416415  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:03.416466  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:03.444338  443658 cri.go:89] found id: ""
	I1014 19:49:03.444355  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.444364  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:03.444368  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:03.444429  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:03.472283  443658 cri.go:89] found id: ""
	I1014 19:49:03.472299  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.472306  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:03.472311  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:03.472368  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:03.499924  443658 cri.go:89] found id: ""
	I1014 19:49:03.499940  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.499947  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:03.499951  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:03.500014  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:03.528675  443658 cri.go:89] found id: ""
	I1014 19:49:03.528691  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.528698  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:03.528703  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:03.528780  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:03.555961  443658 cri.go:89] found id: ""
	I1014 19:49:03.555979  443658 logs.go:282] 0 containers: []
	W1014 19:49:03.555986  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:03.555995  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:03.556009  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:03.615676  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:03.608021    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.608674    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610310    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.610821    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:03.612076    9591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:03.615687  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:03.615699  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:03.680122  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:03.680151  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:03.712091  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:03.712109  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:03.779370  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:03.779396  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:06.297908  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:06.309773  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:06.309831  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:06.337910  443658 cri.go:89] found id: ""
	I1014 19:49:06.337930  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.337939  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:06.337946  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:06.337996  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:06.366075  443658 cri.go:89] found id: ""
	I1014 19:49:06.366090  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.366097  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:06.366102  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:06.366149  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:06.393203  443658 cri.go:89] found id: ""
	I1014 19:49:06.393219  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.393225  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:06.393230  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:06.393274  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:06.421220  443658 cri.go:89] found id: ""
	I1014 19:49:06.421240  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.421250  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:06.421257  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:06.421322  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:06.449354  443658 cri.go:89] found id: ""
	I1014 19:49:06.449373  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.449382  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:06.449388  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:06.449450  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:06.476432  443658 cri.go:89] found id: ""
	I1014 19:49:06.476450  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.476459  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:06.476467  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:06.476536  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:06.504006  443658 cri.go:89] found id: ""
	I1014 19:49:06.504031  443658 logs.go:282] 0 containers: []
	W1014 19:49:06.504038  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:06.504047  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:06.504057  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:06.533877  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:06.533894  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:06.600597  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:06.600622  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:06.619193  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:06.619216  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:06.680047  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:06.672165    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.672728    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.674412    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.675003    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:06.676679    9737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:06.680057  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:06.680069  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:09.242233  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:09.253413  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:09.253465  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:09.280670  443658 cri.go:89] found id: ""
	I1014 19:49:09.280688  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.280698  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:09.280705  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:09.280776  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:09.307015  443658 cri.go:89] found id: ""
	I1014 19:49:09.307033  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.307043  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:09.307049  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:09.307104  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:09.334276  443658 cri.go:89] found id: ""
	I1014 19:49:09.334296  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.334304  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:09.334309  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:09.334357  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:09.360472  443658 cri.go:89] found id: ""
	I1014 19:49:09.360487  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.360494  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:09.360499  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:09.360549  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:09.388322  443658 cri.go:89] found id: ""
	I1014 19:49:09.388338  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.388345  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:09.388349  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:09.388396  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:09.414924  443658 cri.go:89] found id: ""
	I1014 19:49:09.414944  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.414955  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:09.414962  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:09.415023  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:09.441772  443658 cri.go:89] found id: ""
	I1014 19:49:09.441792  443658 logs.go:282] 0 containers: []
	W1014 19:49:09.441800  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:09.441809  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:09.441822  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:09.509426  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:09.509452  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:09.527807  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:09.527829  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:09.587241  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:09.579349    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.579944    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582253    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.582735    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:09.583971    9843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:09.587253  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:09.587265  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:09.654561  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:09.654584  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:12.186794  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:12.198312  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:12.198367  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:12.225457  443658 cri.go:89] found id: ""
	I1014 19:49:12.225476  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.225491  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:12.225497  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:12.225548  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:12.253224  443658 cri.go:89] found id: ""
	I1014 19:49:12.253243  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.253251  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:12.253256  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:12.253317  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:12.280591  443658 cri.go:89] found id: ""
	I1014 19:49:12.280610  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.280617  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:12.280622  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:12.280674  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:12.309016  443658 cri.go:89] found id: ""
	I1014 19:49:12.309033  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.309039  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:12.309044  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:12.309091  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:12.337230  443658 cri.go:89] found id: ""
	I1014 19:49:12.337251  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.337260  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:12.337267  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:12.337336  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:12.364682  443658 cri.go:89] found id: ""
	I1014 19:49:12.364728  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.364737  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:12.364743  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:12.364821  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:12.392936  443658 cri.go:89] found id: ""
	I1014 19:49:12.392960  443658 logs.go:282] 0 containers: []
	W1014 19:49:12.392967  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:12.392976  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:12.392986  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:12.452595  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:12.444355    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.444853    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.446438    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.447015    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:12.449368    9971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:12.452608  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:12.452621  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:12.516437  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:12.516463  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:12.547372  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:12.547391  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:12.614937  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:12.614961  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:15.134260  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:15.146546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:15.146600  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:15.174510  443658 cri.go:89] found id: ""
	I1014 19:49:15.174526  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.174533  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:15.174538  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:15.174585  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:15.202132  443658 cri.go:89] found id: ""
	I1014 19:49:15.202152  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.202162  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:15.202169  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:15.202226  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:15.230616  443658 cri.go:89] found id: ""
	I1014 19:49:15.230633  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.230639  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:15.230644  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:15.230696  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:15.258236  443658 cri.go:89] found id: ""
	I1014 19:49:15.258253  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.258263  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:15.258267  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:15.258326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:15.286042  443658 cri.go:89] found id: ""
	I1014 19:49:15.286059  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.286066  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:15.286072  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:15.286134  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:15.314815  443658 cri.go:89] found id: ""
	I1014 19:49:15.314833  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.314840  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:15.314844  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:15.314897  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:15.341953  443658 cri.go:89] found id: ""
	I1014 19:49:15.341969  443658 logs.go:282] 0 containers: []
	W1014 19:49:15.341976  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:15.341984  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:15.341995  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:15.412363  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:15.412387  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:15.430737  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:15.430770  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:15.492263  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:15.483535   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.484124   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.485892   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.486398   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:15.489083   10099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:15.492274  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:15.492286  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:15.556874  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:15.556899  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:18.089267  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:18.101164  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:18.101225  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:18.130411  443658 cri.go:89] found id: ""
	I1014 19:49:18.130428  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.130435  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:18.130440  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:18.130500  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:18.157908  443658 cri.go:89] found id: ""
	I1014 19:49:18.157927  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.157938  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:18.157943  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:18.157997  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:18.185537  443658 cri.go:89] found id: ""
	I1014 19:49:18.185560  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.185568  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:18.185573  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:18.185627  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:18.212466  443658 cri.go:89] found id: ""
	I1014 19:49:18.212485  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.212493  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:18.212498  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:18.212561  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:18.239975  443658 cri.go:89] found id: ""
	I1014 19:49:18.239993  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.240000  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:18.240005  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:18.240056  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:18.267082  443658 cri.go:89] found id: ""
	I1014 19:49:18.267101  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.267109  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:18.267114  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:18.267163  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:18.293654  443658 cri.go:89] found id: ""
	I1014 19:49:18.293672  443658 logs.go:282] 0 containers: []
	W1014 19:49:18.293679  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:18.293689  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:18.293700  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:18.363853  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:18.363878  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:18.383522  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:18.383545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:18.442304  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:18.435285   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.435849   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437451   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.437904   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:18.438994   10216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:18.442316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:18.442327  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:18.503728  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:18.503752  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
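Each cycle above issues the same per-component CRI query and finds nothing. A minimal shell sketch of that probe (an illustration only, assuming crictl is on the node's PATH; the component list is taken verbatim from the Run: lines above):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      # an empty result corresponds to the 'No container was found matching' warnings above
      [ -z "$ids" ] && echo "no container found matching \"$c\""
    done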
	I1014 19:49:21.035160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:21.046500  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:21.046556  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:21.073686  443658 cri.go:89] found id: ""
	I1014 19:49:21.073705  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.073716  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:21.073723  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:21.073790  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:21.100037  443658 cri.go:89] found id: ""
	I1014 19:49:21.100052  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.100059  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:21.100064  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:21.100107  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:21.127167  443658 cri.go:89] found id: ""
	I1014 19:49:21.127183  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.127190  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:21.127195  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:21.127243  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:21.155028  443658 cri.go:89] found id: ""
	I1014 19:49:21.155045  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.155052  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:21.155056  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:21.155104  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:21.182898  443658 cri.go:89] found id: ""
	I1014 19:49:21.182919  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.182926  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:21.182931  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:21.182981  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:21.214304  443658 cri.go:89] found id: ""
	I1014 19:49:21.214321  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.214327  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:21.214332  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:21.214377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:21.242021  443658 cri.go:89] found id: ""
	I1014 19:49:21.242038  443658 logs.go:282] 0 containers: []
	W1014 19:49:21.242045  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:21.242053  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:21.242065  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:21.259561  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:21.259582  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:21.319723  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:21.312041   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.312668   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314370   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.314958   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:21.316607   10338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:21.319734  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:21.319745  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:21.380339  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:21.380373  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:21.410561  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:21.410580  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:23.982170  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:23.993512  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:23.993566  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:24.021666  443658 cri.go:89] found id: ""
	I1014 19:49:24.021681  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.021688  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:24.021693  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:24.021777  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:24.048763  443658 cri.go:89] found id: ""
	I1014 19:49:24.048788  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.048799  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:24.048806  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:24.048868  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:24.076823  443658 cri.go:89] found id: ""
	I1014 19:49:24.076845  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.076856  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:24.076862  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:24.076920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:24.104097  443658 cri.go:89] found id: ""
	I1014 19:49:24.104117  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.104126  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:24.104130  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:24.104182  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:24.130667  443658 cri.go:89] found id: ""
	I1014 19:49:24.130682  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.130691  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:24.130696  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:24.130747  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:24.158412  443658 cri.go:89] found id: ""
	I1014 19:49:24.158429  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.158437  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:24.158442  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:24.158491  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:24.185765  443658 cri.go:89] found id: ""
	I1014 19:49:24.185785  443658 logs.go:282] 0 containers: []
	W1014 19:49:24.185793  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:24.185801  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:24.185813  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:24.244433  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:24.236694   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.237287   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.238941   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.239414   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:24.240968   10471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:24.244454  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:24.244469  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:24.307235  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:24.307260  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:24.337358  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:24.337379  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:24.406396  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:24.406421  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:26.925678  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:26.936862  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:26.936911  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:26.963233  443658 cri.go:89] found id: ""
	I1014 19:49:26.963249  443658 logs.go:282] 0 containers: []
	W1014 19:49:26.963256  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:26.963261  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:26.963318  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:26.989526  443658 cri.go:89] found id: ""
	I1014 19:49:26.989545  443658 logs.go:282] 0 containers: []
	W1014 19:49:26.989553  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:26.989558  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:26.989606  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:27.016445  443658 cri.go:89] found id: ""
	I1014 19:49:27.016461  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.016468  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:27.016473  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:27.016536  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:27.044936  443658 cri.go:89] found id: ""
	I1014 19:49:27.044954  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.044961  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:27.044965  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:27.045023  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:27.071859  443658 cri.go:89] found id: ""
	I1014 19:49:27.071881  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.071891  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:27.071898  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:27.071964  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:27.101404  443658 cri.go:89] found id: ""
	I1014 19:49:27.101421  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.101431  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:27.101439  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:27.101492  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:27.130140  443658 cri.go:89] found id: ""
	I1014 19:49:27.130158  443658 logs.go:282] 0 containers: []
	W1014 19:49:27.130168  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:27.130178  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:27.130192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:27.191223  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:27.183739   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.184372   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.185983   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.186439   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:27.188034   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:27.191237  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:27.191249  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:27.255430  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:27.255456  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:27.285702  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:27.285740  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:27.352209  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:27.352234  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:29.872354  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:29.883680  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:29.883735  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:29.911601  443658 cri.go:89] found id: ""
	I1014 19:49:29.911621  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.911628  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:29.911634  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:29.911681  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:29.940396  443658 cri.go:89] found id: ""
	I1014 19:49:29.940412  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.940419  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:29.940424  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:29.940471  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:29.969195  443658 cri.go:89] found id: ""
	I1014 19:49:29.969213  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.969220  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:29.969225  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:29.969275  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:29.997694  443658 cri.go:89] found id: ""
	I1014 19:49:29.997715  443658 logs.go:282] 0 containers: []
	W1014 19:49:29.997725  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:29.997732  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:29.997818  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:30.027488  443658 cri.go:89] found id: ""
	I1014 19:49:30.027506  443658 logs.go:282] 0 containers: []
	W1014 19:49:30.027514  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:30.027518  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:30.027568  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:30.054599  443658 cri.go:89] found id: ""
	I1014 19:49:30.054617  443658 logs.go:282] 0 containers: []
	W1014 19:49:30.054625  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:30.054630  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:30.054709  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:30.081817  443658 cri.go:89] found id: ""
	I1014 19:49:30.081833  443658 logs.go:282] 0 containers: []
	W1014 19:49:30.081843  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:30.081854  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:30.081870  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:30.145428  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:30.145454  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:30.177045  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:30.177064  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:30.244236  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:30.244263  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:30.262247  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:30.262268  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:30.320401  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:30.313011   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.313520   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.315086   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.315515   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:30.317170   10727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:32.822227  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:32.833616  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:32.833715  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:32.861467  443658 cri.go:89] found id: ""
	I1014 19:49:32.861484  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.861493  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:32.861499  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:32.861567  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:32.889541  443658 cri.go:89] found id: ""
	I1014 19:49:32.889559  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.889566  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:32.889571  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:32.889616  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:32.915877  443658 cri.go:89] found id: ""
	I1014 19:49:32.915896  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.915904  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:32.915908  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:32.915969  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:32.943538  443658 cri.go:89] found id: ""
	I1014 19:49:32.943558  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.943568  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:32.943573  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:32.943635  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:32.969493  443658 cri.go:89] found id: ""
	I1014 19:49:32.969511  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.969518  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:32.969523  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:32.969581  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:32.995650  443658 cri.go:89] found id: ""
	I1014 19:49:32.995671  443658 logs.go:282] 0 containers: []
	W1014 19:49:32.995679  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:32.995684  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:32.995765  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:33.023836  443658 cri.go:89] found id: ""
	I1014 19:49:33.023856  443658 logs.go:282] 0 containers: []
	W1014 19:49:33.023866  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:33.023876  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:33.023889  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:33.054135  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:33.054157  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:33.120594  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:33.120618  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:33.138783  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:33.138803  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:33.197459  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:33.189973   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.190463   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.192089   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.192508   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:33.194210   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:33.197473  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:33.197483  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
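The retry cadence is driven by an apiserver health wait: roughly every three seconds minikube checks for a kube-apiserver process and, while none exists, gathers the diagnostic bundle. A rough shell equivalent (hypothetical sketch; the real loop lives in minikube's Go code, as the logs.go/cri.go call sites above show):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      # fails with "connection to the server localhost:8441 was refused" while the apiserver is down
      sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig || true
      sleep 3
    done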
	I1014 19:49:35.763533  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:35.775555  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:35.775604  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:35.802773  443658 cri.go:89] found id: ""
	I1014 19:49:35.802794  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.802800  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:35.802805  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:35.802853  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:35.830466  443658 cri.go:89] found id: ""
	I1014 19:49:35.830481  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.830488  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:35.830499  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:35.830545  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:35.857322  443658 cri.go:89] found id: ""
	I1014 19:49:35.857342  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.857350  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:35.857354  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:35.857407  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:35.884681  443658 cri.go:89] found id: ""
	I1014 19:49:35.884705  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.884711  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:35.884717  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:35.884785  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:35.913187  443658 cri.go:89] found id: ""
	I1014 19:49:35.913205  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.913212  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:35.913219  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:35.913284  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:35.941275  443658 cri.go:89] found id: ""
	I1014 19:49:35.941296  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.941306  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:35.941312  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:35.941404  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:35.968221  443658 cri.go:89] found id: ""
	I1014 19:49:35.968242  443658 logs.go:282] 0 containers: []
	W1014 19:49:35.968249  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:35.968258  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:35.968269  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:35.997909  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:35.997926  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:36.065160  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:36.065186  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:36.084069  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:36.084094  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:36.143710  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:36.136552   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.137091   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.138749   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.139231   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:36.140429   10974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:36.143728  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:36.143743  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:38.705714  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:38.717101  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:38.717153  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:38.743695  443658 cri.go:89] found id: ""
	I1014 19:49:38.743711  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.743720  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:38.743725  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:38.743801  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:38.771046  443658 cri.go:89] found id: ""
	I1014 19:49:38.771062  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.771069  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:38.771074  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:38.771120  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:38.798553  443658 cri.go:89] found id: ""
	I1014 19:49:38.798569  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.798579  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:38.798585  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:38.798651  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:38.825740  443658 cri.go:89] found id: ""
	I1014 19:49:38.825773  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.825784  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:38.825790  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:38.825842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:38.852044  443658 cri.go:89] found id: ""
	I1014 19:49:38.852063  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.852074  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:38.852081  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:38.852138  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:38.879494  443658 cri.go:89] found id: ""
	I1014 19:49:38.879511  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.879519  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:38.879524  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:38.879572  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:38.908560  443658 cri.go:89] found id: ""
	I1014 19:49:38.908579  443658 logs.go:282] 0 containers: []
	W1014 19:49:38.908587  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:38.908597  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:38.908608  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:38.967381  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:38.960253   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.960835   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.962461   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.962872   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:38.964250   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:38.967392  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:38.967407  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:39.029751  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:39.029782  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:39.060387  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:39.060407  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:39.131578  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:39.131603  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
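For reference, the log bundle collected on every cycle reduces to four node-side commands, copied verbatim from the Run: lines above; they can be replayed by hand over ssh to reproduce the capture:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a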
	I1014 19:49:41.650879  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:41.662649  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:41.662714  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:41.690616  443658 cri.go:89] found id: ""
	I1014 19:49:41.690632  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.690639  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:41.690644  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:41.690726  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:41.717290  443658 cri.go:89] found id: ""
	I1014 19:49:41.717307  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.717315  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:41.717319  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:41.717370  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:41.744219  443658 cri.go:89] found id: ""
	I1014 19:49:41.744235  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.744242  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:41.744247  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:41.744291  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:41.771856  443658 cri.go:89] found id: ""
	I1014 19:49:41.771874  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.771881  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:41.771886  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:41.771933  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:41.798980  443658 cri.go:89] found id: ""
	I1014 19:49:41.798997  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.799008  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:41.799014  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:41.799082  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:41.824815  443658 cri.go:89] found id: ""
	I1014 19:49:41.824833  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.824841  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:41.824847  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:41.824910  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:41.853352  443658 cri.go:89] found id: ""
	I1014 19:49:41.853369  443658 logs.go:282] 0 containers: []
	W1014 19:49:41.853377  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:41.853385  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:41.853397  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:41.871201  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:41.871221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:41.931818  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:41.924117   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.924656   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.926161   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.926706   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:41.928205   11199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:41.931829  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:41.931839  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:41.997739  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:41.997769  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:42.030107  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:42.030126  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:44.596638  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:44.608335  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:44.608403  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:44.636505  443658 cri.go:89] found id: ""
	I1014 19:49:44.636523  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.636530  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:44.636535  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:44.636592  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:44.663068  443658 cri.go:89] found id: ""
	I1014 19:49:44.663085  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.663091  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:44.663097  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:44.663156  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:44.691243  443658 cri.go:89] found id: ""
	I1014 19:49:44.691259  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.691265  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:44.691270  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:44.691329  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:44.718866  443658 cri.go:89] found id: ""
	I1014 19:49:44.718889  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.718900  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:44.718907  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:44.718964  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:44.746897  443658 cri.go:89] found id: ""
	I1014 19:49:44.746918  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.746926  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:44.746930  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:44.746982  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:44.775031  443658 cri.go:89] found id: ""
	I1014 19:49:44.775049  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.775058  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:44.775065  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:44.775134  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:44.803293  443658 cri.go:89] found id: ""
	I1014 19:49:44.803309  443658 logs.go:282] 0 containers: []
	W1014 19:49:44.803317  443658 logs.go:284] No container was found matching "kindnet"
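
Each retry sweeps the CRI for every expected control-plane container by name, and found id: "" with 0 containers for all seven names (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) means no control-plane container exists at all; the failure is upstream of any single component. A hypothetical stand-alone sketch of the same sweep (assuming crictl is installed; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			// Mirrors: sudo crictl ps -a --quiet --name=<component>
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %s\n", c, strings.TrimSpace(string(out)))
		}
	}
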
	I1014 19:49:44.803326  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:44.803340  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:44.875474  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:44.875500  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:44.894197  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:44.894221  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:44.953777  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:44.946510   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.947021   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.948628   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.949193   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:44.950677   11323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:44.953793  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:44.953807  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:45.014704  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:45.014730  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
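
The pgrep probes above land roughly three seconds apart (19:49:41.9, :44.6, :47.5, ...), so this whole stretch of the log is one fixed-interval retry loop waiting for the apiserver to reappear. A minimal sketch of that shape, assuming a plain sleep-based poll; the real interval and mechanism inside minikube are not visible in the log:

	package main

	import (
		"fmt"
		"time"
	)

	// pollUntil runs check every interval until it returns true or timeout elapses.
	func pollUntil(interval, timeout time.Duration, check func() bool) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if check() {
				return true
			}
			time.Sleep(interval)
		}
		return false
	}

	func main() {
		ok := pollUntil(3*time.Second, 30*time.Second, func() bool {
			fmt.Println("checking for kube-apiserver ...") // stand-in for the pgrep + crictl checks
			return false
		})
		fmt.Println("apiserver healthy:", ok)
	}
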
	I1014 19:49:47.548453  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:47.559665  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:47.559718  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:47.585634  443658 cri.go:89] found id: ""
	I1014 19:49:47.585654  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.585664  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:47.585671  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:47.585770  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:47.613859  443658 cri.go:89] found id: ""
	I1014 19:49:47.613878  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.613888  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:47.613894  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:47.613973  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:47.644468  443658 cri.go:89] found id: ""
	I1014 19:49:47.644489  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.644498  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:47.644504  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:47.644577  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:47.673671  443658 cri.go:89] found id: ""
	I1014 19:49:47.673689  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.673700  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:47.673708  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:47.673794  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:47.702597  443658 cri.go:89] found id: ""
	I1014 19:49:47.702613  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.702621  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:47.702626  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:47.702687  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:47.729519  443658 cri.go:89] found id: ""
	I1014 19:49:47.729535  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.729542  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:47.729546  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:47.729594  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:47.757807  443658 cri.go:89] found id: ""
	I1014 19:49:47.757824  443658 logs.go:282] 0 containers: []
	W1014 19:49:47.757831  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:47.757839  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:47.757853  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:47.829770  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:47.829807  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:47.848287  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:47.848311  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:47.906512  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:47.898946   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.899539   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.901229   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.901705   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:47.903277   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:47.906525  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:47.906537  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:47.971102  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:47.971128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:50.502817  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:50.514425  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:50.514473  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:50.541600  443658 cri.go:89] found id: ""
	I1014 19:49:50.541620  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.541631  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:50.541637  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:50.541689  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:50.569005  443658 cri.go:89] found id: ""
	I1014 19:49:50.569032  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.569041  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:50.569049  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:50.569121  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:50.597051  443658 cri.go:89] found id: ""
	I1014 19:49:50.597068  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.597075  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:50.597079  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:50.597137  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:50.626382  443658 cri.go:89] found id: ""
	I1014 19:49:50.626405  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.626412  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:50.626419  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:50.626473  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:50.654979  443658 cri.go:89] found id: ""
	I1014 19:49:50.654996  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.655004  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:50.655008  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:50.655078  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:50.683528  443658 cri.go:89] found id: ""
	I1014 19:49:50.683548  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.683558  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:50.683565  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:50.683618  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:50.711499  443658 cri.go:89] found id: ""
	I1014 19:49:50.711517  443658 logs.go:282] 0 containers: []
	W1014 19:49:50.711527  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:50.711537  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:50.711549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:50.778199  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:50.778225  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:50.796226  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:50.796248  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:50.854616  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:50.846701   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.848209   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.848680   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.850246   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:50.850635   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:50.854631  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:50.854643  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:50.918886  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:50.918914  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
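
The container status step relies on a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first tries crictl (using its resolved path when available) and only runs docker ps -a if that fails. A simplified Go sketch of the same preference order; it decides up front via a PATH lookup instead of try-then-fallback:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tool := "crictl"
		if _, err := exec.LookPath(tool); err != nil {
			tool = "docker" // no crictl on PATH: fall back to docker
		}
		out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("listing containers failed:", err)
			return
		}
		fmt.Print(string(out))
	}
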
	I1014 19:49:53.451878  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:53.463151  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:53.463203  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:53.489474  443658 cri.go:89] found id: ""
	I1014 19:49:53.489490  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.489499  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:53.489506  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:53.489568  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:53.516620  443658 cri.go:89] found id: ""
	I1014 19:49:53.516638  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.516649  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:53.516656  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:53.516712  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:53.543251  443658 cri.go:89] found id: ""
	I1014 19:49:53.543270  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.543281  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:53.543287  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:53.543354  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:53.570736  443658 cri.go:89] found id: ""
	I1014 19:49:53.570769  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.570779  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:53.570786  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:53.570840  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:53.598355  443658 cri.go:89] found id: ""
	I1014 19:49:53.598372  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.598381  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:53.598387  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:53.598450  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:53.625505  443658 cri.go:89] found id: ""
	I1014 19:49:53.625524  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.625535  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:53.625542  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:53.625592  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:53.654789  443658 cri.go:89] found id: ""
	I1014 19:49:53.654808  443658 logs.go:282] 0 containers: []
	W1014 19:49:53.654815  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:53.654823  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:53.654839  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:53.726281  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:53.726306  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:53.744456  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:53.744480  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:53.804344  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:53.796970   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.797615   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799272   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.799836   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:53.800930   11702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:53.804365  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:53.804378  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:53.864148  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:53.864174  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:56.397395  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:56.408940  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:56.408994  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:56.436261  443658 cri.go:89] found id: ""
	I1014 19:49:56.436277  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.436284  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:56.436291  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:56.436343  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:56.464497  443658 cri.go:89] found id: ""
	I1014 19:49:56.464514  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.464523  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:56.464529  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:56.464584  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:56.492551  443658 cri.go:89] found id: ""
	I1014 19:49:56.492573  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.492580  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:56.492585  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:56.492634  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:56.519631  443658 cri.go:89] found id: ""
	I1014 19:49:56.519650  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.519661  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:56.519667  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:56.519716  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:56.545245  443658 cri.go:89] found id: ""
	I1014 19:49:56.545262  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.545269  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:56.545274  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:56.545322  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:56.572677  443658 cri.go:89] found id: ""
	I1014 19:49:56.572700  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.572711  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:56.572718  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:56.572795  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:56.601136  443658 cri.go:89] found id: ""
	I1014 19:49:56.601156  443658 logs.go:282] 0 containers: []
	W1014 19:49:56.601167  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:56.601178  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:56.601192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:56.666034  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:56.666060  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:49:56.698200  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:56.698222  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:56.767958  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:56.767983  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:56.786835  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:56.786860  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:56.845436  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:56.837911   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.838400   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840026   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.840573   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:56.842214   11844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
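
Every cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match. pgrep exits with status 1 when nothing matches, which is presumably what keeps the loop in its log-gathering branch here. A small sketch of the same check (hypothetical, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same flags and pattern as the log line above.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err != nil {
			fmt.Println("no kube-apiserver process:", err) // exit status 1 = no match
			return
		}
		fmt.Println("kube-apiserver process is running")
	}
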
	I1014 19:49:59.347179  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:59.358660  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:49:59.358711  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:49:59.387000  443658 cri.go:89] found id: ""
	I1014 19:49:59.387027  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.387034  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:49:59.387040  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:49:59.387088  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:49:59.414823  443658 cri.go:89] found id: ""
	I1014 19:49:59.414840  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.414847  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:49:59.414852  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:49:59.414912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:49:59.442607  443658 cri.go:89] found id: ""
	I1014 19:49:59.442624  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.442631  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:49:59.442636  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:49:59.442696  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:49:59.471821  443658 cri.go:89] found id: ""
	I1014 19:49:59.471846  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.471856  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:49:59.471864  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:49:59.471937  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:49:59.498236  443658 cri.go:89] found id: ""
	I1014 19:49:59.498256  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.498263  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:49:59.498268  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:49:59.498316  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:49:59.525020  443658 cri.go:89] found id: ""
	I1014 19:49:59.525039  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.525046  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:49:59.525051  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:49:59.525101  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:49:59.551137  443658 cri.go:89] found id: ""
	I1014 19:49:59.551157  443658 logs.go:282] 0 containers: []
	W1014 19:49:59.551167  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:49:59.551180  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:49:59.551192  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:49:59.622834  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:49:59.622862  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:49:59.641369  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:49:59.641392  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:49:59.701545  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:49:59.694218   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.694838   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696377   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.696859   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:49:59.698400   11954 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:49:59.701565  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:49:59.701623  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:49:59.765745  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:49:59.765773  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:02.298114  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:02.309805  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:02.309861  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:02.337973  443658 cri.go:89] found id: ""
	I1014 19:50:02.337989  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.337996  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:02.338001  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:02.338069  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:02.366907  443658 cri.go:89] found id: ""
	I1014 19:50:02.366925  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.366933  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:02.366938  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:02.366996  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:02.394409  443658 cri.go:89] found id: ""
	I1014 19:50:02.394427  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.394437  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:02.394445  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:02.394507  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:02.423803  443658 cri.go:89] found id: ""
	I1014 19:50:02.423825  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.423835  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:02.423841  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:02.423894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:02.453316  443658 cri.go:89] found id: ""
	I1014 19:50:02.453346  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.453357  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:02.453363  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:02.453429  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:02.480872  443658 cri.go:89] found id: ""
	I1014 19:50:02.480901  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.480911  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:02.480917  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:02.480981  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:02.508491  443658 cri.go:89] found id: ""
	I1014 19:50:02.508513  443658 logs.go:282] 0 containers: []
	W1014 19:50:02.508520  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:02.508530  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:02.508545  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:02.538904  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:02.538926  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:02.604250  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:02.604276  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:02.624221  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:02.624244  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:02.686637  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:02.678751   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.679376   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681040   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.681562   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:02.683182   12105 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:02.686653  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:02.686670  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:05.248160  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:05.259486  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:05.259543  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:05.287245  443658 cri.go:89] found id: ""
	I1014 19:50:05.287266  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.287277  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:05.287283  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:05.287337  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:05.316262  443658 cri.go:89] found id: ""
	I1014 19:50:05.316281  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.316292  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:05.316298  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:05.316357  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:05.345733  443658 cri.go:89] found id: ""
	I1014 19:50:05.345767  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.345779  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:05.345786  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:05.345842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:05.373802  443658 cri.go:89] found id: ""
	I1014 19:50:05.373821  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.373832  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:05.373840  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:05.373907  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:05.401831  443658 cri.go:89] found id: ""
	I1014 19:50:05.401849  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.401856  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:05.401861  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:05.401915  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:05.430126  443658 cri.go:89] found id: ""
	I1014 19:50:05.430148  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.430160  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:05.430167  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:05.430238  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:05.459121  443658 cri.go:89] found id: ""
	I1014 19:50:05.459139  443658 logs.go:282] 0 containers: []
	W1014 19:50:05.459146  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:05.459154  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:05.459166  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:05.519744  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:05.512669   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.513219   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.514764   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.515265   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:05.516363   12206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1014 19:50:05.519777  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:05.519791  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:05.584599  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:05.584627  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:05.617086  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:05.617104  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:05.684896  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:05.684924  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
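
Note how the gathering order shuffles between cycles: most run kubelet, dmesg, describe nodes, CRI-O, container status, but the 19:49:56 cycle starts with CRI-O and the 19:50:05 cycle starts with describe nodes. That is consistent with ranging over a Go map, whose iteration order is deliberately randomized; whether minikube's logs.go really keeps the log sources in a map is an assumption. A quick demonstration of the effect:

	package main

	import "fmt"

	func main() {
		sources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "kubectl describe nodes",
			"CRI-O":            "journalctl -u crio -n 400",
			"container status": "crictl ps -a",
		}
		for i := 0; i < 3; i++ {
			for name := range sources { // range order varies run to run
				fmt.Println("Gathering logs for", name, "...")
			}
			fmt.Println("---")
		}
	}
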
	I1014 19:50:08.207248  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:08.218426  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:08.218487  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:08.245002  443658 cri.go:89] found id: ""
	I1014 19:50:08.245023  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.245032  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:08.245038  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:08.245101  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:08.273388  443658 cri.go:89] found id: ""
	I1014 19:50:08.273404  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.273411  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:08.273415  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:08.273470  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:08.301943  443658 cri.go:89] found id: ""
	I1014 19:50:08.301959  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.301966  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:08.301971  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:08.302030  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:08.328569  443658 cri.go:89] found id: ""
	I1014 19:50:08.328587  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.328594  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:08.328599  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:08.328649  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:08.356010  443658 cri.go:89] found id: ""
	I1014 19:50:08.356028  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.356036  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:08.356042  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:08.356095  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:08.383392  443658 cri.go:89] found id: ""
	I1014 19:50:08.383407  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.383414  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:08.383419  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:08.383469  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:08.410636  443658 cri.go:89] found id: ""
	I1014 19:50:08.410653  443658 logs.go:282] 0 containers: []
	W1014 19:50:08.410659  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:08.410667  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:08.410679  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:08.441110  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:08.441129  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:08.506036  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:08.506060  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:08.524075  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:08.524094  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:08.583708  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:08.576429   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.576973   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.578510   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.579066   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.580610   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:08.576429   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.576973   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.578510   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.579066   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:08.580610   12348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:08.583720  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:08.583740  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:11.145672  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:11.157553  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:11.157615  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:11.186767  443658 cri.go:89] found id: ""
	I1014 19:50:11.186787  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.186794  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:11.186799  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:11.186858  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:11.216248  443658 cri.go:89] found id: ""
	I1014 19:50:11.216265  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.216273  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:11.216278  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:11.216326  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:11.244352  443658 cri.go:89] found id: ""
	I1014 19:50:11.244375  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.244384  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:11.244390  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:11.244457  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:11.271891  443658 cri.go:89] found id: ""
	I1014 19:50:11.271908  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.271915  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:11.271920  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:11.271973  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:11.300619  443658 cri.go:89] found id: ""
	I1014 19:50:11.300635  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.300642  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:11.300647  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:11.300724  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:11.327778  443658 cri.go:89] found id: ""
	I1014 19:50:11.327797  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.327804  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:11.327809  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:11.327856  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:11.356398  443658 cri.go:89] found id: ""
	I1014 19:50:11.356416  443658 logs.go:282] 0 containers: []
	W1014 19:50:11.356425  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:11.356435  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:11.356448  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:11.387147  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:11.387172  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:11.456903  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:11.456928  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:11.475336  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:11.475358  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:11.533524  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:11.526103   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.526626   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528173   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528651   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.530139   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:11.526103   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.526626   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528173   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.528651   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:11.530139   12467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:11.533537  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:11.533549  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:14.099433  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:14.110822  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:14.110894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:14.137081  443658 cri.go:89] found id: ""
	I1014 19:50:14.137099  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.137108  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:14.137115  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:14.137180  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:14.165873  443658 cri.go:89] found id: ""
	I1014 19:50:14.165893  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.165917  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:14.165924  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:14.165991  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:14.194062  443658 cri.go:89] found id: ""
	I1014 19:50:14.194082  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.194091  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:14.194098  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:14.194163  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:14.222120  443658 cri.go:89] found id: ""
	I1014 19:50:14.222139  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.222149  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:14.222156  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:14.222239  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:14.249411  443658 cri.go:89] found id: ""
	I1014 19:50:14.249430  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.249439  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:14.249444  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:14.249517  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:14.276644  443658 cri.go:89] found id: ""
	I1014 19:50:14.276661  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.276668  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:14.276673  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:14.276723  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:14.305269  443658 cri.go:89] found id: ""
	I1014 19:50:14.305287  443658 logs.go:282] 0 containers: []
	W1014 19:50:14.305297  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:14.305308  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:14.305323  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:14.335633  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:14.335650  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:14.407263  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:14.407297  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:14.425952  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:14.425975  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:14.484783  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:14.477581   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.478203   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.479661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.480126   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.481572   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:14.477581   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.478203   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.479661   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.480126   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:14.481572   12590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:14.484800  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:14.484815  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:17.050537  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:17.062166  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:17.062228  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:17.089863  443658 cri.go:89] found id: ""
	I1014 19:50:17.089883  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.089893  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:17.089900  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:17.089956  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:17.118126  443658 cri.go:89] found id: ""
	I1014 19:50:17.118146  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.118153  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:17.118160  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:17.118211  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:17.145473  443658 cri.go:89] found id: ""
	I1014 19:50:17.145493  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.145504  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:17.145511  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:17.145563  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:17.173278  443658 cri.go:89] found id: ""
	I1014 19:50:17.173297  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.173305  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:17.173310  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:17.173364  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:17.200155  443658 cri.go:89] found id: ""
	I1014 19:50:17.200175  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.200183  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:17.200189  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:17.200259  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:17.227022  443658 cri.go:89] found id: ""
	I1014 19:50:17.227039  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.227046  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:17.227051  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:17.227097  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:17.252693  443658 cri.go:89] found id: ""
	I1014 19:50:17.252711  443658 logs.go:282] 0 containers: []
	W1014 19:50:17.252719  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:17.252730  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:17.252771  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:17.284340  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:17.284358  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:17.350087  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:17.350110  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:17.367795  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:17.367815  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:17.426270  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:17.419190   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.419650   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421295   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421842   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.423058   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:17.419190   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.419650   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421295   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.421842   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:17.423058   12729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:17.426290  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:17.426300  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:19.990063  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:20.001404  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:20.001462  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:20.029335  443658 cri.go:89] found id: ""
	I1014 19:50:20.029356  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.029365  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:20.029371  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:20.029418  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:20.056226  443658 cri.go:89] found id: ""
	I1014 19:50:20.056244  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.056251  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:20.056256  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:20.056303  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:20.085632  443658 cri.go:89] found id: ""
	I1014 19:50:20.085651  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.085666  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:20.085674  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:20.085738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:20.113679  443658 cri.go:89] found id: ""
	I1014 19:50:20.113699  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.113717  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:20.113723  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:20.113793  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:20.141622  443658 cri.go:89] found id: ""
	I1014 19:50:20.141640  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.141647  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:20.141651  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:20.141733  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:20.170013  443658 cri.go:89] found id: ""
	I1014 19:50:20.170032  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.170042  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:20.170049  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:20.170106  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:20.198748  443658 cri.go:89] found id: ""
	I1014 19:50:20.198785  443658 logs.go:282] 0 containers: []
	W1014 19:50:20.198795  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:20.198806  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:20.198818  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:20.216706  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:20.216728  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:20.275300  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:20.267702   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.268302   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.269917   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.270346   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.272061   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:20.267702   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.268302   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.269917   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.270346   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:20.272061   12828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:20.275316  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:20.275329  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:20.340712  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:20.340738  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:20.371777  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:20.371799  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:22.939903  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:22.951439  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:22.951487  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:22.978695  443658 cri.go:89] found id: ""
	I1014 19:50:22.978715  443658 logs.go:282] 0 containers: []
	W1014 19:50:22.978725  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:22.978732  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:22.978808  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:23.005937  443658 cri.go:89] found id: ""
	I1014 19:50:23.005959  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.005971  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:23.005978  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:23.006032  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:23.032228  443658 cri.go:89] found id: ""
	I1014 19:50:23.032247  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.032257  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:23.032264  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:23.032330  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:23.059407  443658 cri.go:89] found id: ""
	I1014 19:50:23.059424  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.059436  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:23.059450  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:23.059503  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:23.087490  443658 cri.go:89] found id: ""
	I1014 19:50:23.087508  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.087518  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:23.087524  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:23.087588  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:23.116625  443658 cri.go:89] found id: ""
	I1014 19:50:23.116642  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.116649  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:23.116654  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:23.116699  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:23.145362  443658 cri.go:89] found id: ""
	I1014 19:50:23.145379  443658 logs.go:282] 0 containers: []
	W1014 19:50:23.145388  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:23.145399  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:23.145410  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:23.210392  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:23.210420  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:23.242258  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:23.242277  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:23.309159  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:23.309186  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:23.327723  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:23.327744  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:23.386750  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:23.379457   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.380034   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.381688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.382198   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.383449   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:23.379457   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.380034   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.381688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.382198   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:23.383449   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:25.887778  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:25.899287  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:25.899359  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:25.928125  443658 cri.go:89] found id: ""
	I1014 19:50:25.928146  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.928156  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:25.928162  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:25.928212  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:25.957045  443658 cri.go:89] found id: ""
	I1014 19:50:25.957061  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.957068  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:25.957073  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:25.957126  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:25.984205  443658 cri.go:89] found id: ""
	I1014 19:50:25.984228  443658 logs.go:282] 0 containers: []
	W1014 19:50:25.984237  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:25.984243  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:25.984289  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:26.012054  443658 cri.go:89] found id: ""
	I1014 19:50:26.012071  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.012078  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:26.012082  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:26.012128  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:26.040304  443658 cri.go:89] found id: ""
	I1014 19:50:26.040321  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.040328  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:26.040332  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:26.040392  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:26.066676  443658 cri.go:89] found id: ""
	I1014 19:50:26.066696  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.066705  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:26.066712  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:26.066787  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:26.094653  443658 cri.go:89] found id: ""
	I1014 19:50:26.094674  443658 logs.go:282] 0 containers: []
	W1014 19:50:26.094684  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:26.094693  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:26.094704  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:26.124447  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:26.124465  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:26.195983  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:26.196006  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:26.214895  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:26.214917  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:26.275196  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:26.267636   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.268258   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.269963   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.270471   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.272090   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:26.267636   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.268258   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.269963   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.270471   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:26.272090   13087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:26.275208  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:26.275223  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:28.837202  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:28.848579  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:28.848634  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:28.875162  443658 cri.go:89] found id: ""
	I1014 19:50:28.875182  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.875194  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:28.875200  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:28.875254  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:28.903438  443658 cri.go:89] found id: ""
	I1014 19:50:28.903455  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.903462  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:28.903467  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:28.903520  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:28.931290  443658 cri.go:89] found id: ""
	I1014 19:50:28.931307  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.931314  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:28.931319  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:28.931365  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:28.958813  443658 cri.go:89] found id: ""
	I1014 19:50:28.958831  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.958838  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:28.958843  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:28.958894  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:28.984686  443658 cri.go:89] found id: ""
	I1014 19:50:28.984704  443658 logs.go:282] 0 containers: []
	W1014 19:50:28.984711  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:28.984718  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:28.984783  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:29.012142  443658 cri.go:89] found id: ""
	I1014 19:50:29.012161  443658 logs.go:282] 0 containers: []
	W1014 19:50:29.012172  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:29.012183  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:29.012238  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:29.038850  443658 cri.go:89] found id: ""
	I1014 19:50:29.038870  443658 logs.go:282] 0 containers: []
	W1014 19:50:29.038880  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:29.038891  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:29.038902  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:29.069928  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:29.069967  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:29.138190  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:29.138214  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:29.156875  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:29.156904  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:29.216410  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:29.208955   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.209524   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211285   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211710   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.213259   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:29.208955   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.209524   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211285   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.211710   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:29.213259   13215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:29.216425  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:29.216442  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:31.781917  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:31.793447  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:31.793505  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:31.821136  443658 cri.go:89] found id: ""
	I1014 19:50:31.821153  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.821160  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:31.821165  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:31.821214  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:31.849490  443658 cri.go:89] found id: ""
	I1014 19:50:31.849508  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.849515  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:31.849520  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:31.849573  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:31.876743  443658 cri.go:89] found id: ""
	I1014 19:50:31.876777  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.876785  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:31.876790  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:31.876842  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:31.905558  443658 cri.go:89] found id: ""
	I1014 19:50:31.905576  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.905584  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:31.905591  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:31.905654  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:31.934155  443658 cri.go:89] found id: ""
	I1014 19:50:31.934174  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.934185  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:31.934191  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:31.934252  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:31.961840  443658 cri.go:89] found id: ""
	I1014 19:50:31.961857  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.961870  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:31.961875  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:31.961924  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:31.989285  443658 cri.go:89] found id: ""
	I1014 19:50:31.989306  443658 logs.go:282] 0 containers: []
	W1014 19:50:31.989317  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:31.989330  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:31.989341  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:32.061358  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:32.061382  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:32.080223  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:32.080243  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:32.142648  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:32.134637   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.135263   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137075   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137669   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.139334   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:32.134637   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.135263   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137075   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.137669   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:32.139334   13329 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:32.142684  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:32.142699  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:32.209500  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:32.209528  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
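Every round of this loop fails the same way: pgrep finds no kube-apiserver process, crictl returns an empty ID list for every control-plane component, and the describe-nodes probe dies with connection refused on localhost:8441, meaning nothing is listening on the apiserver port inside the node. A minimal hand check of that conclusion, as a sketch assuming shell access to the node (the profile name is not shown in this excerpt, so <profile> below is a placeholder):

    # Sketch: confirm nothing serves the apiserver port (assumes minikube ssh access;
    # <profile> is a placeholder, not taken from this log)
    minikube ssh -p <profile> -- sudo ss -ltn 'sport = :8441'    # expect no LISTEN entry
    minikube ssh -p <profile> -- sudo pgrep -af kube-apiserver   # expect no matching process
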
	I1014 19:50:34.742153  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:34.753291  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:34.753345  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:34.781021  443658 cri.go:89] found id: ""
	I1014 19:50:34.781038  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.781045  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:34.781050  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:34.781097  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:34.807324  443658 cri.go:89] found id: ""
	I1014 19:50:34.807341  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.807349  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:34.807354  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:34.807402  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:34.834727  443658 cri.go:89] found id: ""
	I1014 19:50:34.834748  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.834771  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:34.834778  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:34.834833  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:34.861999  443658 cri.go:89] found id: ""
	I1014 19:50:34.862019  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.862031  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:34.862037  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:34.862087  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:34.889667  443658 cri.go:89] found id: ""
	I1014 19:50:34.889684  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.889690  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:34.889694  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:34.889742  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:34.916811  443658 cri.go:89] found id: ""
	I1014 19:50:34.916828  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.916834  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:34.916840  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:34.916899  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:34.944926  443658 cri.go:89] found id: ""
	I1014 19:50:34.944943  443658 logs.go:282] 0 containers: []
	W1014 19:50:34.944951  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:34.944959  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:34.944973  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:35.013004  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:35.013029  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:35.030877  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:35.030903  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:35.089384  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:35.081483   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.082170   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.083809   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.084270   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.085889   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:35.081483   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.082170   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.083809   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.084270   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:35.085889   13442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:35.089398  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:35.089409  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:35.149874  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:35.149899  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
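The container checks are a fixed sweep over the control-plane components minikube expects. Replayed as one loop on the node, using exactly the crictl invocation from the log, the sweep is:

    # Sketch: minikube's per-component CRI probe (run inside the node).
    # In this run every query returned an empty ID list, hence "0 containers".
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        echo "$c: ${ids:-<none>}"
    done
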
	I1014 19:50:37.684070  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:37.695415  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:37.695469  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:37.723582  443658 cri.go:89] found id: ""
	I1014 19:50:37.723598  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.723605  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:37.723611  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:37.723688  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:37.751328  443658 cri.go:89] found id: ""
	I1014 19:50:37.751347  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.751354  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:37.751363  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:37.751410  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:37.779279  443658 cri.go:89] found id: ""
	I1014 19:50:37.779300  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.779311  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:37.779317  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:37.779392  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:37.806937  443658 cri.go:89] found id: ""
	I1014 19:50:37.806954  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.806974  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:37.806979  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:37.807028  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:37.834418  443658 cri.go:89] found id: ""
	I1014 19:50:37.834435  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.834442  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:37.834447  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:37.834495  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:37.861687  443658 cri.go:89] found id: ""
	I1014 19:50:37.861705  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.861712  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:37.861719  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:37.861791  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:37.889605  443658 cri.go:89] found id: ""
	I1014 19:50:37.889622  443658 logs.go:282] 0 containers: []
	W1014 19:50:37.889628  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:37.889637  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:37.889648  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:37.954899  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:37.954928  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:37.988108  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:37.988128  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:38.058132  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:38.058158  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:38.076773  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:38.076795  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:38.135957  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:38.127889   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.128350   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.130577   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.131078   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.132629   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:38.127889   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.128350   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.130577   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.131078   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:38.132629   13576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
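Each round also gathers the same four log sources; only their order shifts between rounds. Pulled out of the loop, the collection step is the following commands from the log (the backtick substitution rewritten as $( ) for readability), runnable as-is on the node:

    # Sketch: the four log sources gathered per round
    sudo journalctl -u kubelet -n 400            # kubelet journal, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and worse
    sudo journalctl -u crio -n 400               # CRI-O journal, last 400 lines
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status, docker fallback
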
	I1014 19:50:40.636752  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:40.647999  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:40.648055  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:40.674081  443658 cri.go:89] found id: ""
	I1014 19:50:40.674099  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.674107  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:40.674112  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:40.674160  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:40.701160  443658 cri.go:89] found id: ""
	I1014 19:50:40.701177  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.701184  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:40.701189  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:40.701252  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:40.728441  443658 cri.go:89] found id: ""
	I1014 19:50:40.728462  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.728472  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:40.728480  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:40.728527  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:40.756302  443658 cri.go:89] found id: ""
	I1014 19:50:40.756318  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.756325  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:40.756330  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:40.756375  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:40.782665  443658 cri.go:89] found id: ""
	I1014 19:50:40.782682  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.782721  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:40.782727  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:40.782808  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:40.809993  443658 cri.go:89] found id: ""
	I1014 19:50:40.810011  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.810017  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:40.810022  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:40.810081  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:40.837750  443658 cri.go:89] found id: ""
	I1014 19:50:40.837785  443658 logs.go:282] 0 containers: []
	W1014 19:50:40.837795  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:40.837805  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:40.837816  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:40.905565  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:40.905598  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:40.923794  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:40.923817  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:40.982479  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:40.975467   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.976110   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.977609   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.978094   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.979129   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:40.975467   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.976110   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.977609   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.978094   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:40.979129   13681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:40.982490  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:40.982503  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:41.043844  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:41.043869  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:43.575810  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:43.587076  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:43.587126  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:43.613973  443658 cri.go:89] found id: ""
	I1014 19:50:43.613992  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.614001  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:43.614007  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:43.614062  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:43.641631  443658 cri.go:89] found id: ""
	I1014 19:50:43.641649  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.641655  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:43.641662  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:43.641740  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:43.668838  443658 cri.go:89] found id: ""
	I1014 19:50:43.668853  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.668860  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:43.668865  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:43.668912  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:43.696427  443658 cri.go:89] found id: ""
	I1014 19:50:43.696447  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.696457  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:43.696464  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:43.696515  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:43.723629  443658 cri.go:89] found id: ""
	I1014 19:50:43.723646  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.723652  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:43.723657  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:43.723738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:43.750543  443658 cri.go:89] found id: ""
	I1014 19:50:43.750564  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.750573  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:43.750579  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:43.750630  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:43.777077  443658 cri.go:89] found id: ""
	I1014 19:50:43.777094  443658 logs.go:282] 0 containers: []
	W1014 19:50:43.777100  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:43.777109  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:43.777123  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:43.847663  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:43.847745  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:43.865887  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:43.865906  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:43.924883  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:43.917622   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.918218   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.919830   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.920193   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.921570   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:43.917622   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.918218   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.919830   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.920193   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:43.921570   13820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:43.924899  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:43.924910  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:43.985909  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:43.985934  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:46.519152  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:46.530574  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:46.530626  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:46.557422  443658 cri.go:89] found id: ""
	I1014 19:50:46.557437  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.557443  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:46.557448  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:46.557494  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:46.584670  443658 cri.go:89] found id: ""
	I1014 19:50:46.584690  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.584699  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:46.584704  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:46.584777  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:46.611880  443658 cri.go:89] found id: ""
	I1014 19:50:46.611898  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.611905  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:46.611912  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:46.611961  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:46.639343  443658 cri.go:89] found id: ""
	I1014 19:50:46.639358  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.639365  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:46.639370  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:46.639420  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:46.667657  443658 cri.go:89] found id: ""
	I1014 19:50:46.667677  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.667686  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:46.667693  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:46.667751  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:46.694195  443658 cri.go:89] found id: ""
	I1014 19:50:46.694218  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.694228  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:46.694234  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:46.694288  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:46.723852  443658 cri.go:89] found id: ""
	I1014 19:50:46.723873  443658 logs.go:282] 0 containers: []
	W1014 19:50:46.723883  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:46.723893  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:46.723911  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:46.795594  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:46.795617  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:46.813986  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:46.814005  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:46.874107  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:46.866264   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.866806   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868435   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868992   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.870716   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:46.866264   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.866806   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868435   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.868992   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:46.870716   13938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:46.874123  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:46.874137  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:46.939214  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:46.939239  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:49.472291  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:49.483645  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:49.483703  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:49.512485  443658 cri.go:89] found id: ""
	I1014 19:50:49.512508  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.512519  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:49.512526  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:49.512579  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:49.541986  443658 cri.go:89] found id: ""
	I1014 19:50:49.542003  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.542010  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:49.542015  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:49.542062  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:49.568820  443658 cri.go:89] found id: ""
	I1014 19:50:49.568837  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.568843  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:49.568848  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:49.568904  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:49.595650  443658 cri.go:89] found id: ""
	I1014 19:50:49.595667  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.595674  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:49.595679  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:49.595738  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:49.624580  443658 cri.go:89] found id: ""
	I1014 19:50:49.624597  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.624604  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:49.624610  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:49.624668  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:49.651849  443658 cri.go:89] found id: ""
	I1014 19:50:49.651871  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.651881  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:49.651888  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:49.651942  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:49.679343  443658 cri.go:89] found id: ""
	I1014 19:50:49.679361  443658 logs.go:282] 0 containers: []
	W1014 19:50:49.679369  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:49.679378  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:49.679390  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:49.710667  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:49.710688  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:49.779683  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:49.779708  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:49.797614  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:49.797632  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:49.858709  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:49.850102   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.850643   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853179   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853667   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.855254   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:49.850102   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.850643   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853179   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.853667   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:49.855254   14071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:49.858721  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:49.858734  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:52.425201  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:52.437033  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:52.437091  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:52.464814  443658 cri.go:89] found id: ""
	I1014 19:50:52.464835  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.464845  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:52.464852  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:52.464920  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:52.493108  443658 cri.go:89] found id: ""
	I1014 19:50:52.493128  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.493141  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:52.493147  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:52.493206  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:52.520875  443658 cri.go:89] found id: ""
	I1014 19:50:52.520896  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.520905  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:52.520912  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:52.520971  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:52.548477  443658 cri.go:89] found id: ""
	I1014 19:50:52.548496  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.548503  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:52.548509  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:52.548571  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:52.576240  443658 cri.go:89] found id: ""
	I1014 19:50:52.576260  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.576272  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:52.576278  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:52.576345  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:52.604501  443658 cri.go:89] found id: ""
	I1014 19:50:52.604519  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.604529  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:52.604535  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:52.604605  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:52.636730  443658 cri.go:89] found id: ""
	I1014 19:50:52.636746  443658 logs.go:282] 0 containers: []
	W1014 19:50:52.636777  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:52.636789  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:52.636802  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:52.708243  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:52.708275  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:52.726867  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:52.726890  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:52.785730  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:52.778588   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.779176   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.780807   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.781257   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.782451   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:52.778588   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.779176   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.780807   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.781257   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:52.782451   14184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:52.785743  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:52.785783  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:52.849671  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:52.849695  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:55.381592  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:55.393025  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:50:55.393093  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:50:55.422130  443658 cri.go:89] found id: ""
	I1014 19:50:55.422150  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.422159  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:50:55.422166  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:50:55.422225  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:50:55.449578  443658 cri.go:89] found id: ""
	I1014 19:50:55.449593  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.449599  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:50:55.449606  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:50:55.449652  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:50:55.478330  443658 cri.go:89] found id: ""
	I1014 19:50:55.478349  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.478359  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:50:55.478366  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:50:55.478418  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:50:55.506046  443658 cri.go:89] found id: ""
	I1014 19:50:55.506062  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.506069  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:50:55.506075  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:50:55.506121  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:50:55.533431  443658 cri.go:89] found id: ""
	I1014 19:50:55.533448  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.533460  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:50:55.533464  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:50:55.533512  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:50:55.559554  443658 cri.go:89] found id: ""
	I1014 19:50:55.559571  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.559579  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:50:55.559583  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:50:55.559628  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:50:55.586490  443658 cri.go:89] found id: ""
	I1014 19:50:55.586506  443658 logs.go:282] 0 containers: []
	W1014 19:50:55.586513  443658 logs.go:284] No container was found matching "kindnet"
	I1014 19:50:55.586522  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:50:55.586533  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 19:50:55.654422  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:50:55.654447  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:50:55.673174  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:50:55.673195  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:50:55.732549  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:50:55.725166   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.725836   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727380   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727867   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.729272   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:50:55.725166   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.725836   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727380   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.727867   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:50:55.729272   14316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:50:55.732565  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:50:55.732578  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:50:55.798718  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:50:55.798747  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:50:58.332284  443658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:50:58.343801  443658 kubeadm.go:601] duration metric: took 4m4.243920348s to restartPrimaryControlPlane
	W1014 19:50:58.343901  443658 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 19:50:58.344005  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
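At 19:50:58 the loop times out: kubeadm.go reports 4m4.24s spent on restartPrimaryControlPlane, emits the "Unable to restart control-plane node(s), will reset cluster" warning (the trailing <no value> is apparently an unfilled template placeholder in that message, not node output), and falls back to wiping the node. The fallback is the single command on the line above; as a standalone sketch:

    # Sketch: the reset fallback, same command as the log line above; PATH is
    # prefixed with minikube's pinned v1.34.1 binaries directory
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
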
	I1014 19:50:58.799455  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:50:58.813683  443658 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:50:58.822431  443658 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:50:58.822479  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:50:58.830731  443658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:50:58.830743  443658 kubeadm.go:157] found existing configuration files:
	
	I1014 19:50:58.830813  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:50:58.838788  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:50:58.838843  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:50:58.846629  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:50:58.854899  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:50:58.854960  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:50:58.862796  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:50:58.870845  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:50:58.870900  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:50:58.878602  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:50:58.886687  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:50:58.886812  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
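The four grep/rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already names the expected control-plane endpoint. A compact bash equivalent of that sweep, as a sketch grounded in the commands shown (same endpoint, same four files):

    endpoint="https://control-plane.minikube.internal:8441"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # grep exits non-zero when the endpoint (or the file itself) is missing,
      # which is the status-2 case the log treats as "will remove".
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done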
	I1014 19:50:58.894706  443658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:50:58.956049  443658 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:50:59.017911  443658 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:55:01.512196  443658 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	I1014 19:55:01.512300  443658 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:55:01.515811  443658 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:55:01.515863  443658 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:55:01.515937  443658 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:55:01.515981  443658 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:55:01.516011  443658 kubeadm.go:318] OS: Linux
	I1014 19:55:01.516049  443658 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:55:01.516087  443658 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:55:01.516133  443658 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:55:01.516172  443658 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:55:01.516210  443658 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:55:01.516249  443658 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:55:01.516288  443658 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:55:01.516322  443658 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:55:01.516431  443658 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:55:01.516587  443658 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:55:01.516701  443658 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:55:01.516795  443658 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:55:01.519360  443658 out.go:252]   - Generating certificates and keys ...
	I1014 19:55:01.519469  443658 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:55:01.519557  443658 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:55:01.519666  443658 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:55:01.519744  443658 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:55:01.519850  443658 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:55:01.519914  443658 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:55:01.519978  443658 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:55:01.520034  443658 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:55:01.520097  443658 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:55:01.520167  443658 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:55:01.520203  443658 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:55:01.520251  443658 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:55:01.520299  443658 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:55:01.520348  443658 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:55:01.520393  443658 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:55:01.520450  443658 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:55:01.520499  443658 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:55:01.520576  443658 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:55:01.520641  443658 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:55:01.523229  443658 out.go:252]   - Booting up control plane ...
	I1014 19:55:01.523319  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:55:01.523390  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:55:01.523444  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:55:01.523551  443658 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:55:01.523641  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:55:01.523810  443658 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:55:01.523922  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:55:01.523954  443658 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:55:01.524086  443658 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:55:01.524181  443658 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:55:01.524234  443658 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.568458ms
	I1014 19:55:01.524321  443658 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:55:01.524389  443658 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:55:01.524486  443658 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:55:01.524591  443658 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:55:01.524662  443658 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	I1014 19:55:01.524728  443658 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	I1014 19:55:01.524840  443658 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	I1014 19:55:01.524843  443658 kubeadm.go:318] 
	I1014 19:55:01.524928  443658 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:55:01.525021  443658 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:55:01.525148  443658 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:55:01.525276  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:55:01.525390  443658 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:55:01.525475  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:55:01.525507  443658 kubeadm.go:318] 
	W1014 19:55:01.525679  443658 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.568458ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000296304s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000399838s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000393905s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
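kubeadm gave each component the full 4m0s before declaring it unhealthy, and the "connection refused" errors mean its probes never reached a listener at all. Replaying the same three checks by hand, with the exact URLs from the log, separates a slow component from an absent one (a sketch, run inside the node):

    # The same health endpoints kubeadm polled, taken verbatim from the log.
    curl -ksS https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -ksS https://127.0.0.1:10259/livez      # kube-scheduler
    curl -ksS https://192.168.49.2:8441/livez    # kube-apiserver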
	
	I1014 19:55:01.525798  443658 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 19:55:01.982887  443658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:55:01.996173  443658 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 19:55:01.996227  443658 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:55:02.004750  443658 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:55:02.004776  443658 kubeadm.go:157] found existing configuration files:
	
	I1014 19:55:02.004817  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 19:55:02.013003  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:55:02.013070  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:55:02.021099  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 19:55:02.029431  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:55:02.029492  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:55:02.037121  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 19:55:02.045152  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:55:02.045198  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:55:02.052887  443658 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 19:55:02.060584  443658 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:55:02.060626  443658 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:55:02.068308  443658 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 19:55:02.126727  443658 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 19:55:02.188353  443658 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:59:05.052390  443658 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 19:59:05.052568  443658 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 19:59:05.055525  443658 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:59:05.055579  443658 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:59:05.055669  443658 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 19:59:05.055719  443658 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 19:59:05.055746  443658 kubeadm.go:318] OS: Linux
	I1014 19:59:05.055802  443658 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 19:59:05.055840  443658 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 19:59:05.055878  443658 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 19:59:05.055926  443658 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 19:59:05.055963  443658 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 19:59:05.056004  443658 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 19:59:05.056049  443658 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 19:59:05.056084  443658 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 19:59:05.056142  443658 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:59:05.056223  443658 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:59:05.056299  443658 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:59:05.056392  443658 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:59:05.059274  443658 out.go:252]   - Generating certificates and keys ...
	I1014 19:59:05.059351  443658 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:59:05.059415  443658 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:59:05.059493  443658 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 19:59:05.059567  443658 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 19:59:05.059629  443658 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 19:59:05.059672  443658 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 19:59:05.059751  443658 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 19:59:05.059826  443658 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 19:59:05.059887  443658 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 19:59:05.059966  443658 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 19:59:05.060015  443658 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 19:59:05.060080  443658 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:59:05.060144  443658 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:59:05.060195  443658 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:59:05.060238  443658 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:59:05.060288  443658 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:59:05.060337  443658 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:59:05.060403  443658 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:59:05.060483  443658 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:59:05.061914  443658 out.go:252]   - Booting up control plane ...
	I1014 19:59:05.062009  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:59:05.062118  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:59:05.062251  443658 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:59:05.062371  443658 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:59:05.062470  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:59:05.062594  443658 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:59:05.062668  443658 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:59:05.062709  443658 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:59:05.062894  443658 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:59:05.063001  443658 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:59:05.063067  443658 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001430917s
	I1014 19:59:05.063161  443658 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:59:05.063245  443658 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1014 19:59:05.063317  443658 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:59:05.063385  443658 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:59:05.063443  443658 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	I1014 19:59:05.063502  443658 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	I1014 19:59:05.063588  443658 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	I1014 19:59:05.063599  443658 kubeadm.go:318] 
	I1014 19:59:05.063715  443658 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 19:59:05.063820  443658 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 19:59:05.063899  443658 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 19:59:05.064013  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 19:59:05.064087  443658 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 19:59:05.064169  443658 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 19:59:05.064205  443658 kubeadm.go:318] 
	I1014 19:59:05.064256  443658 kubeadm.go:402] duration metric: took 12m11.001770383s to StartCluster
	I1014 19:59:05.064319  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 19:59:05.064377  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 19:59:05.094590  443658 cri.go:89] found id: ""
	I1014 19:59:05.094608  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.094615  443658 logs.go:284] No container was found matching "kube-apiserver"
	I1014 19:59:05.094620  443658 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 19:59:05.094695  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 19:59:05.123951  443658 cri.go:89] found id: ""
	I1014 19:59:05.123969  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.123989  443658 logs.go:284] No container was found matching "etcd"
	I1014 19:59:05.123996  443658 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 19:59:05.124057  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 19:59:05.153788  443658 cri.go:89] found id: ""
	I1014 19:59:05.153806  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.153813  443658 logs.go:284] No container was found matching "coredns"
	I1014 19:59:05.153818  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 19:59:05.153866  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 19:59:05.182209  443658 cri.go:89] found id: ""
	I1014 19:59:05.182227  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.182233  443658 logs.go:284] No container was found matching "kube-scheduler"
	I1014 19:59:05.182239  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 19:59:05.182295  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 19:59:05.211682  443658 cri.go:89] found id: ""
	I1014 19:59:05.211743  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.211773  443658 logs.go:284] No container was found matching "kube-proxy"
	I1014 19:59:05.211787  443658 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 19:59:05.211840  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 19:59:05.239904  443658 cri.go:89] found id: ""
	I1014 19:59:05.239927  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.239935  443658 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 19:59:05.239942  443658 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 19:59:05.239993  443658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 19:59:05.266617  443658 cri.go:89] found id: ""
	I1014 19:59:05.266636  443658 logs.go:282] 0 containers: []
	W1014 19:59:05.266643  443658 logs.go:284] No container was found matching "kindnet"
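The scan above checks one expected component at a time with a crictl name filter and finds nothing for any of them. The same sweep as a single loop, a sketch using only the flags the log itself uses:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      # --quiet prints bare container IDs; an empty result is the
      # "0 containers" case logged for every component above.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] || echo "no container matching \"$name\""
    done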
	I1014 19:59:05.266710  443658 logs.go:123] Gathering logs for dmesg ...
	I1014 19:59:05.266747  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 19:59:05.284891  443658 logs.go:123] Gathering logs for describe nodes ...
	I1014 19:59:05.284919  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 19:59:05.345910  443658 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:05.338670   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.339278   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.340773   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.341189   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.342723   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 19:59:05.338670   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.339278   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.340773   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.341189   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:05.342723   15637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 19:59:05.345933  443658 logs.go:123] Gathering logs for CRI-O ...
	I1014 19:59:05.345953  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 19:59:05.410981  443658 logs.go:123] Gathering logs for container status ...
	I1014 19:59:05.411011  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 19:59:05.441593  443658 logs.go:123] Gathering logs for kubelet ...
	I1014 19:59:05.441611  443658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 19:59:05.511762  443658 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 19:59:05.511841  443658 out.go:285] * 
	W1014 19:59:05.511933  443658 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:59:05.511948  443658 out.go:285] * 
	W1014 19:59:05.513702  443658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 19:59:05.517408  443658 out.go:203] 
	W1014 19:59:05.518938  443658 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001430917s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000444633s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000481701s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000675884s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 19:59:05.518965  443658 out.go:285] * 
	I1014 19:59:05.520443  443658 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.647391921Z" level=info msg="createCtr: removing container 6e2c2c4cb04a0ff330473aae999924576003bb30cc6d310e8d22ce70f7fdc315" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.647463494Z" level=info msg="createCtr: deleting container 6e2c2c4cb04a0ff330473aae999924576003bb30cc6d310e8d22ce70f7fdc315 from storage" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.650679605Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.611367475Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=036c7d74-f14a-4e37-bb50-6bb0624e5a1e name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.611466309Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=76283b85-fae3-4575-a63d-e9f1083700fd name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.612473494Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b09af6a-8521-4074-934a-fe4637b5d212 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.612574787Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=17ee2120-156c-4b7c-a568-480cee735a23 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613538771Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613632658Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-744288/kube-controller-manager" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613813713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613827311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.62134908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.62198193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.623398407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.624010925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.643915388Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.64664635Z" level=info msg="createCtr: deleting container ID 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91 from idIndex" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.646840424Z" level=info msg="createCtr: removing container 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.646898107Z" level=info msg="createCtr: deleting container 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91 from storage" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.64748926Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649668552Z" level=info msg="createCtr: deleting container ID d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2 from idIndex" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649712371Z" level=info msg="createCtr: removing container d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649865828Z" level=info msg="createCtr: deleting container d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2 from storage" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.652478939Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_5ce31098ce493b77069c880f0c6ac8e6_0" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.652817187Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:13.962330   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:13.963208   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:13.964486   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:13.965287   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:13.966947   16585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:14 up  2:41,  0 user,  load average: 0.68, 0.23, 1.12
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:59:09 functional-744288 kubelet[15039]: E1014 19:59:09.651200   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:09 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:09 functional-744288 kubelet[15039]:  > podSandboxID="ee064544bd4e42055c680fa907a0270df315e93c45e8bb6818d6d53626d20a55"
	Oct 14 19:59:09 functional-744288 kubelet[15039]: E1014 19:59:09.651317   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:09 functional-744288 kubelet[15039]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:09 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:09 functional-744288 kubelet[15039]: E1014 19:59:09.651361   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:59:10 functional-744288 kubelet[15039]: E1014 19:59:10.031269   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-744288&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 14 19:59:10 functional-744288 kubelet[15039]: E1014 19:59:10.439574   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.610788   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.610919   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652838   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652950   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652998   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.653087   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > podSandboxID="834c7ad581f3fcc6f5d04a9ecdd22e99efde1b20033a85c33ba33f7567fe39fc"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.653125   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.654224   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	

-- /stdout --
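
The repeated CreateContainer failure above ("cannot open sd-bus: No such file or directory") is what keeps every control-plane container from starting, which in turn explains the connection-refused errors on 8441. A minimal triage sketch, assuming the node is the kicbase container shown in this run and that CRI-O reads its config from /etc/crio/crio.conf (this image may instead use drop-ins under /etc/crio/crio.conf.d/); the cgroupfs switch at the end is a hypothetical workaround, not a fix validated against this job:

	# from the host, enter the node container, then run the triage kubeadm suggests above
	out/minikube-linux-amd64 -p functional-744288 ssh

	# inside the node: find the failing container and read its logs
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

	# "cannot open sd-bus" usually means CRI-O's systemd cgroup manager cannot reach
	# systemd's D-Bus socket; check which manager this image configures
	grep -rn cgroup_manager /etc/crio/

	# hypothetical workaround: switch to cgroupfs in /etc/crio/crio.conf
	# ([crio.runtime] cgroup_manager = "cgroupfs", conmon_cgroup = "pod"),
	# set cgroupDriver: cgroupfs in /var/lib/kubelet/config.yaml, then restart both
	sudo systemctl restart crio kubelet
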
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (340.90701ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (3.47s)
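
The status probes in these post-mortems each read one field of minikube's status template: {{.APIServer}} above reports Stopped while {{.Host}} (used further down) reports Running, i.e. the Docker container is up but the control plane inside it is not. Both fields can be read in one call; a sketch, assuming the --format flag accepts multi-field templates as ordinary Go templates do:

	out/minikube-linux-amd64 status -p functional-744288 --format='host={{.Host}} apiserver={{.APIServer}}'
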

TestFunctional/parallel/ServiceCmdConnect (1.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-744288 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-744288 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (60.442309ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-744288 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-744288 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-744288 describe po hello-node-connect: exit status 1 (61.555307ms)

** stderr ** 
	E1014 19:59:15.091131  460625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.091592  460625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.093333  460625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.093684  460625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.094907  460625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-744288 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-744288 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-744288 logs -l app=hello-node-connect: exit status 1 (67.606351ms)

** stderr ** 
	E1014 19:59:15.158420  460634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.159038  460634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.160523  460634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.160910  460634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-744288 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-744288 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-744288 describe svc hello-node-connect: exit status 1 (62.150333ms)

** stderr ** 
	E1014 19:59:15.221505  460649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.221971  460649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.223411  460649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.223738  460649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:15.225115  460649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-744288 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
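
Every kubectl call above fails with the same dial tcp 192.168.49.2:8441: connect: connection refused, so the per-object describes carry no extra signal. A single reachability probe would establish that up front; a sketch using kubectl's raw API access:

	# one round-trip to the apiserver's health endpoint; fails fast if 8441 is closed
	kubectl --context functional-744288 get --raw /livez --request-timeout=5s
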
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
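
One useful datum in the inspect dump above: the NetworkSettings.Ports map shows the guest apiserver port 8441/tcp published on the host at 127.0.0.1:32901. That binding is ephemeral, so a sketch that re-reads it rather than hard-coding it (docker port and the apiserver's /livez endpoint, both already seen in this report, are the only assumptions):

	# resolve the current host binding for the guest's apiserver port
	docker port functional-744288 8441/tcp    # prints e.g. 127.0.0.1:32901, as in the dump above
	# probe the apiserver through that binding; here it would be refused,
	# since the kube-apiserver container never started
	curl -sk https://127.0.0.1:32901/livez
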
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (331.089458ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ functional-744288 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config unset cpus                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service │ functional-744288 service list                                                                                             │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ config  │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ config  │ functional-744288 config set cpus 2                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config unset cpus                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh -n functional-744288 sudo cat /home/docker/cp-test.txt                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ config  │ functional-744288 config get cpus                                                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ service │ functional-744288 service list -o json                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh echo hello                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ cp      │ functional-744288 cp functional-744288:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3238097529/001/cp-test.txt │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service │ functional-744288 service --namespace=default --https --url hello-node                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh cat /etc/hostname                                                                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh -n functional-744288 sudo cat /home/docker/cp-test.txt                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ service │ functional-744288 service hello-node --url --format={{.IP}}                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel  │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ tunnel  │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ service │ functional-744288 service hello-node --url                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ cp      │ functional-744288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ tunnel  │ functional-744288 tunnel --alsologtostderr                                                                                 │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh -n functional-744288 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ addons  │ functional-744288 addons list                                                                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ addons  │ functional-744288 addons list -o json                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ start   │ -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:59:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:59:15.560556  460836 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.560662  460836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.560675  460836 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.560681  460836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.560898  460836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.561468  460836 out.go:368] Setting JSON to false
	I1014 19:59:15.562541  460836 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.562651  460836 start.go:141] virtualization: kvm guest
	I1014 19:59:15.565020  460836 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.566580  460836 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.566624  460836 notify.go:220] Checking for updates...
	I1014 19:59:15.569260  460836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.570873  460836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.575024  460836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.576588  460836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:15.577970  460836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.647391921Z" level=info msg="createCtr: removing container 6e2c2c4cb04a0ff330473aae999924576003bb30cc6d310e8d22ce70f7fdc315" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.647463494Z" level=info msg="createCtr: deleting container 6e2c2c4cb04a0ff330473aae999924576003bb30cc6d310e8d22ce70f7fdc315 from storage" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:09 functional-744288 crio[5849]: time="2025-10-14T19:59:09.650679605Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=0e5c004a-e99a-4ca5-82e4-160e1832f434 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.611367475Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=036c7d74-f14a-4e37-bb50-6bb0624e5a1e name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.611466309Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=76283b85-fae3-4575-a63d-e9f1083700fd name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.612473494Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6b09af6a-8521-4074-934a-fe4637b5d212 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.612574787Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=17ee2120-156c-4b7c-a568-480cee735a23 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613538771Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613632658Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-744288/kube-controller-manager" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613813713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.613827311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.62134908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.62198193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.623398407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.624010925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.643915388Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.64664635Z" level=info msg="createCtr: deleting container ID 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91 from idIndex" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.646840424Z" level=info msg="createCtr: removing container 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.646898107Z" level=info msg="createCtr: deleting container 0198668cf8f9b17cdd9059614e22cfde53f0d3a3687c1f1676b902ee917ecd91 from storage" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.64748926Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649668552Z" level=info msg="createCtr: deleting container ID d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2 from idIndex" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649712371Z" level=info msg="createCtr: removing container d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.649865828Z" level=info msg="createCtr: deleting container d97313d22756522c38c8755736a8aca9ed9ebf892661c6746b1f01eb12c01ba2 from storage" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.652478939Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_5ce31098ce493b77069c880f0c6ac8e6_0" id=8b002ed2-7873-41d0-8072-fb8bb25b24b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:11 functional-744288 crio[5849]: time="2025-10-14T19:59:11.652817187Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=26c7be0c-4575-4233-bef9-55b997ad3643 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:16.225203   16873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:16.225833   16873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:16.227476   16873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:16.227980   16873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:16.229504   16873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:16 up  2:41,  0 user,  load average: 0.68, 0.23, 1.12
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:59:09 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:09 functional-744288 kubelet[15039]: E1014 19:59:09.651361   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:59:10 functional-744288 kubelet[15039]: E1014 19:59:10.031269   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-744288&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 14 19:59:10 functional-744288 kubelet[15039]: E1014 19:59:10.439574   15039 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.610788   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.610919   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652838   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652950   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.652998   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.653087   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > podSandboxID="834c7ad581f3fcc6f5d04a9ecdd22e99efde1b20033a85c33ba33f7567fe39fc"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.653125   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:11 functional-744288 kubelet[15039]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:11 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:11 functional-744288 kubelet[15039]: E1014 19:59:11.654224   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:59:14 functional-744288 kubelet[15039]: E1014 19:59:14.624236   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.236841   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.377190   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01ddb1340  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,LastTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: I1014 19:59:15.393702   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.394167   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (320.607121ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.64s)
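Editor's note: the kubelet excerpt above pins down the root cause. Every control-plane container (etcd, kube-apiserver, kube-controller-manager) fails with CreateContainerError "cannot open sd-bus: No such file or directory", which typically indicates the runtime's systemd cgroup manager cannot reach a systemd D-Bus socket inside the node container. Because the apiserver container never starts, nothing listens on 192.168.49.2:8441, and every reflector, lease, and helper call above degenerates into "connection refused". A minimal Go probe to confirm the port-level failure independently of the test harness (the address and timeout are read off the logs, not taken from minikube's code):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the apiserver endpoint that every reflector in the log above
// fails to reach. A refused connection reproduces the failure at the
// TCP level, before TLS or authentication are involved.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // expect "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}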
x
+
TestFunctional/parallel/PersistentVolumeClaim (241.57s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
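Editor's note: functional_test_pvc_test.go:50 drives a four-minute poll of kube-system for pods carrying the integration-test=storage-provisioner label; each failed list attempt emits one of the warnings below. A client-go sketch of that kind of wait loop follows, where the clientset wiring and the two-second poll interval are illustrative assumptions rather than the harness's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls kube-system for pods matching the test's label
// selector until one appears or the deadline passes; list errors (the
// "connection refused" warnings below) are logged and retried.
func waitForPods(ctx context.Context, cs kubernetes.Interface) error {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		if err == nil && len(pods.Items) > 0 {
			return nil
		}
		if err != nil {
			fmt.Println("WARNING: pod list returned:", err)
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for storage-provisioner pods")
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
		fmt.Println(err)
	}
}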
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1014 19:59:14.658378  417373 retry.go:31] will retry after 3.590629516s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1014 19:59:29.447105  417373 retry.go:31] will retry after 10.074545595s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1014 19:59:39.521861  417373 retry.go:31] will retry after 12.721191657s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1014 19:59:52.243301  417373 retry.go:31] will retry after 41.379668537s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1014 20:00:33.623705  417373 retry.go:31] will retry after 35.361337319s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (312.29536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
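For context: the identical warning above comes from a retry loop that lists pods by label selector against the apiserver until one is Running or the 4m0s deadline expires; each refused connection is warned about and retried rather than failing immediately. A minimal sketch of that kind of wait loop, assuming client-go and a reachable kubeconfig (this is illustrative, not minikube's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s for up to 4m, mirroring the 4m0s deadline in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
					LabelSelector: "integration-test=storage-provisioner",
				})
				if err != nil {
					// Transient apiserver errors (the connection refused above)
					// are logged and retried, not treated as fatal.
					fmt.Println("WARNING: pod list returned:", err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("failed waiting for storage-provisioner:", err)
		}
	}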
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
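The inspect output above carries the addressing the harness relies on: the node is only reachable through the 127.0.0.1 port bindings (here 8441/tcp mapped to host port 32901) and the bridge IP 192.168.49.2 on the per-profile network. A minimal sketch of reading those same fields programmatically with the Docker Go SDK, assuming the container name from the report (error handling trimmed):

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// "functional-744288" is the container name from the inspect output above.
		info, err := cli.ContainerInspect(context.Background(), "functional-744288")
		if err != nil {
			panic(err)
		}

		// Host-side binding for the apiserver port (8441/tcp -> 127.0.0.1:32901 above).
		for _, b := range info.NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
		}
		// Container IP on the per-profile bridge network (192.168.49.2 above).
		if n, ok := info.NetworkSettings.Networks["functional-744288"]; ok {
			fmt.Println("bridge IP:", n.IPAddress)
		}
	}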
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (304.322442ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
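The --format={{.APIServer}} and --format={{.Host}} flags used above are Go text/template selectors over minikube's status struct, which is why the same node can report Host "Running" while APIServer reports "Stopped". A tiny illustration of how such a flag renders; the struct here only mirrors the two fields selected in the report, not minikube's real type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields selected above; minikube's actual struct has more.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"}
		for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
			tmpl := template.Must(template.New("status").Parse(f))
			tmpl.Execute(os.Stdout, st) // prints "Running", then "Stopped"
			os.Stdout.WriteString("\n")
		}
	}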
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-744288 ssh findmnt -T /mount3                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ mount          │ -p functional-744288 --kill=true                                                                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image          │ functional-744288 image save kicbase/echo-server:functional-744288 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image rm kicbase/echo-server:functional-744288 --alsologtostderr                                                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image save --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/417373.pem                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/test/nested/copy/417373/hosts                                                                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh            │ functional-744288 ssh sudo cat /usr/share/ca-certificates/417373.pem                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/4173732.pem                                                                                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /usr/share/ca-certificates/4173732.pem                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format short --alsologtostderr                                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format json --alsologtostderr                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format table --alsologtostderr                                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format yaml --alsologtostderr                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh pgrep buildkitd                                                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:59:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:59:15.984416  461207 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.984586  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.984597  461207 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.984604  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.985010  461207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.985511  461207 out.go:368] Setting JSON to false
	I1014 19:59:15.986502  461207 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.986600  461207 start.go:141] virtualization: kvm guest
	I1014 19:59:15.988840  461207 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.990551  461207 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.990567  461207 notify.go:220] Checking for updates...
	I1014 19:59:15.993365  461207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.994948  461207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.997169  461207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.999150  461207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:16.000873  461207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:59:16.003345  461207 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:16.004102  461207 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:59:16.029353  461207 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:59:16.029472  461207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:59:16.097661  461207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:16.086601927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:59:16.097897  461207 docker.go:318] overlay module found
	I1014 19:59:16.099803  461207 out.go:179] * Using the docker driver based on existing profile
	I1014 19:59:16.101025  461207 start.go:305] selected driver: docker
	I1014 19:59:16.101045  461207 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:16.101172  461207 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:59:16.103591  461207 out.go:203] 
	W1014 19:59:16.105109  461207 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 19:59:16.106244  461207 out.go:203] 
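	# The RSRC_INSUFFICIENT_REQ_MEMORY warning above is minikube's preflight memory validation:
	# this start attempt requested 250MiB, below the 1800MB usable minimum, so minikube exited
	# before touching the existing cluster. A sketch of an invocation that clears the check
	# (the --memory value here is an assumption, not taken from this run):
	#   out/minikube-linux-amd64 start -p functional-744288 --memory=2048 --driver=docker --container-runtime=crio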
	
	
	==> CRI-O <==
	Oct 14 20:03:05 functional-744288 crio[5849]: time="2025-10-14T20:03:05.636718112Z" level=info msg="createCtr: removing container 94ccb402192e463d374bf22b07f1eedaac99c8d56fab88a9d47db3edeedcd174" id=6e266dff-dcb3-4775-90d9-83b3ddb383e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:05 functional-744288 crio[5849]: time="2025-10-14T20:03:05.636776146Z" level=info msg="createCtr: deleting container 94ccb402192e463d374bf22b07f1eedaac99c8d56fab88a9d47db3edeedcd174 from storage" id=6e266dff-dcb3-4775-90d9-83b3ddb383e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:05 functional-744288 crio[5849]: time="2025-10-14T20:03:05.638963586Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-744288_kube-system_e9679524bf37cc2b727411d0e5a93bfe_0" id=6e266dff-dcb3-4775-90d9-83b3ddb383e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.611244115Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d9537250-b555-418b-980e-e5bbd9ddca85 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.612210831Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=caa9ff17-c3bf-479d-b7c7-92bdbb0bd585 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.613241564Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=833c0a50-51ef-46ec-959d-8fea804394b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.613465258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.61718752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.617843262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.632926332Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=833c0a50-51ef-46ec-959d-8fea804394b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.634284575Z" level=info msg="createCtr: deleting container ID cf837663ee855684ff9a11632cdf203d4f5fe304aa016a63ec46c089cc38a72d from idIndex" id=833c0a50-51ef-46ec-959d-8fea804394b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.634319095Z" level=info msg="createCtr: removing container cf837663ee855684ff9a11632cdf203d4f5fe304aa016a63ec46c089cc38a72d" id=833c0a50-51ef-46ec-959d-8fea804394b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.634349184Z" level=info msg="createCtr: deleting container cf837663ee855684ff9a11632cdf203d4f5fe304aa016a63ec46c089cc38a72d from storage" id=833c0a50-51ef-46ec-959d-8fea804394b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:07 functional-744288 crio[5849]: time="2025-10-14T20:03:07.637695397Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-744288_kube-system_5ce31098ce493b77069c880f0c6ac8e6_0" id=833c0a50-51ef-46ec-959d-8fea804394b6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.611421262Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=de277c42-af35-42be-94ac-fbad1e645a94 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.612428068Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1f78f2f3-aac7-4971-84f1-6bd95b7f19d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.613323354Z" level=info msg="Creating container: kube-system/etcd-functional-744288/etcd" id=775a3a0a-29be-4d95-a32d-fd003b44bfd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.613535898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.616570295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.617000659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.634319077Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=775a3a0a-29be-4d95-a32d-fd003b44bfd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.635732741Z" level=info msg="createCtr: deleting container ID 83c8c021ed049b9dfb78625d1adaf3f0102d17d39e4636b8b55bdb7cee45fb27 from idIndex" id=775a3a0a-29be-4d95-a32d-fd003b44bfd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.635788549Z" level=info msg="createCtr: removing container 83c8c021ed049b9dfb78625d1adaf3f0102d17d39e4636b8b55bdb7cee45fb27" id=775a3a0a-29be-4d95-a32d-fd003b44bfd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.635826663Z" level=info msg="createCtr: deleting container 83c8c021ed049b9dfb78625d1adaf3f0102d17d39e4636b8b55bdb7cee45fb27 from storage" id=775a3a0a-29be-4d95-a32d-fd003b44bfd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:03:10 functional-744288 crio[5849]: time="2025-10-14T20:03:10.637852729Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=775a3a0a-29be-4d95-a32d-fd003b44bfd5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:03:14.658988   19254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 20:03:14.659871   19254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 20:03:14.661533   19254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 20:03:14.662028   19254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 20:03:14.663671   19254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
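	# Every kubectl probe above fails identically: nothing listens on localhost:8441 because,
	# per the CRI-O log above, no kube-apiserver container is ever created ("cannot open sd-bus").
	# A quick confirmation from the host (profile container name from the logs; assumes curl
	# exists in the kicbase image):
	#   docker exec functional-744288 curl -sk https://localhost:8441/healthz || echo "apiserver down"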
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:03:14 up  2:45,  0 user,  load average: 0.04, 0.21, 0.92
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:03:05 functional-744288 kubelet[15039]: E1014 20:03:05.639486   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-744288" podUID="e9679524bf37cc2b727411d0e5a93bfe"
	Oct 14 20:03:06 functional-744288 kubelet[15039]: E1014 20:03:06.274985   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 20:03:06 functional-744288 kubelet[15039]: I1014 20:03:06.468922   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 20:03:06 functional-744288 kubelet[15039]: E1014 20:03:06.469293   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 20:03:06 functional-744288 kubelet[15039]: E1014 20:03:06.664343   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-744288.186e73b01dda9b53\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01dda9b53  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-744288 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600746835 +0000 UTC m=+0.555296610,LastTimestamp:2025-10-14 19:55:04.602195006 +0000 UTC m=+0.556744787,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,Repo
rtingInstance:functional-744288,}"
	Oct 14 20:03:07 functional-744288 kubelet[15039]: E1014 20:03:07.610734   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 20:03:07 functional-744288 kubelet[15039]: E1014 20:03:07.638034   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:03:07 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:03:07 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 20:03:07 functional-744288 kubelet[15039]: E1014 20:03:07.638167   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:03:07 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:03:07 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 20:03:07 functional-744288 kubelet[15039]: E1014 20:03:07.638214   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	Oct 14 20:03:10 functional-744288 kubelet[15039]: E1014 20:03:10.610985   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 20:03:10 functional-744288 kubelet[15039]: E1014 20:03:10.638152   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:03:10 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:03:10 functional-744288 kubelet[15039]:  > podSandboxID="ee064544bd4e42055c680fa907a0270df315e93c45e8bb6818d6d53626d20a55"
	Oct 14 20:03:10 functional-744288 kubelet[15039]: E1014 20:03:10.638257   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:03:10 functional-744288 kubelet[15039]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:03:10 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 20:03:10 functional-744288 kubelet[15039]: E1014 20:03:10.638287   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 20:03:13 functional-744288 kubelet[15039]: E1014 20:03:13.275949   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 20:03:13 functional-744288 kubelet[15039]: I1014 20:03:13.471149   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 20:03:13 functional-744288 kubelet[15039]: E1014 20:03:13.471598   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 20:03:14 functional-744288 kubelet[15039]: E1014 20:03:14.639665   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (308.877495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.57s)
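The recurring "cannot open sd-bus: No such file or directory" CreateContainer errors above are the proximate cause of this failure: CRI-O cannot create any control-plane container, so the apiserver never comes up and every status or kubectl probe sees a stopped server. That error string usually means the runtime is configured for the systemd cgroup manager while no systemd D-Bus socket is reachable inside the node. A minimal diagnostic sketch (container name taken from the logs; the config path, socket path, and the cgroupfs fallback are assumptions, not verified against this run):

  # Effective cgroup manager in the node's CRI-O config
  docker exec functional-744288 grep -Rn "cgroup_manager" /etc/crio/
  # Is a systemd bus socket present inside the node?
  docker exec functional-744288 test -S /run/dbus/system_bus_socket && echo "sd-bus present" || echo "sd-bus missing"
  # Common workaround (assumption): set cgroup_manager = "cgroupfs" and conmon_cgroup = "pod"
  # under [crio.runtime], then: docker exec functional-744288 systemctl restart crio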

                                                
                                    
x
+
TestFunctional/parallel/MySQL (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-744288 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-744288 replace --force -f testdata/mysql.yaml: exit status 1 (58.799074ms)

                                                
                                                
** stderr ** 
	E1014 19:59:25.747667  466221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:25.748348  466221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-744288 replace --force -f testdata/mysql.yaml" failed: exit status 1
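The replace fails before any YAML is applied: both discovery requests and the resource call die with "connection refused" against 192.168.49.2:8441, the same apiserver endpoint the PersistentVolumeClaim failure above traced back to the sd-bus container-create errors. A reproduction sketch against the same context (plain kubectl usage; nothing here is specific to this suite):

  kubectl --context functional-744288 cluster-info        # fails with the same connection refused
  kubectl --context functional-744288 replace --force -f testdata/mysql.yaml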
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (339.959084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh -- ls -la /mount-9p                                                                                                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount1                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount2 --alsologtostderr -v=1                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount1 --alsologtostderr -v=1                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount3 --alsologtostderr -v=1                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount1                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount2                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount3                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ mount   │ -p functional-744288 --kill=true                                                                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image save kicbase/echo-server:functional-744288 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image rm kicbase/echo-server:functional-744288 --alsologtostderr                                                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image save --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh sudo cat /etc/ssl/certs/417373.pem                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh sudo cat /etc/test/nested/copy/417373/hosts                                                                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh sudo cat /usr/share/ca-certificates/417373.pem                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:59:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:59:15.984416  461207 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.984586  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.984597  461207 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.984604  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.985010  461207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.985511  461207 out.go:368] Setting JSON to false
	I1014 19:59:15.986502  461207 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.986600  461207 start.go:141] virtualization: kvm guest
	I1014 19:59:15.988840  461207 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.990551  461207 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.990567  461207 notify.go:220] Checking for updates...
	I1014 19:59:15.993365  461207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.994948  461207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.997169  461207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.999150  461207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:16.000873  461207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:59:16.003345  461207 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:16.004102  461207 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:59:16.029353  461207 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:59:16.029472  461207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:59:16.097661  461207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:16.086601927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:59:16.097897  461207 docker.go:318] overlay module found
	I1014 19:59:16.099803  461207 out.go:179] * Using the docker driver based on the existing profile
	I1014 19:59:16.101025  461207 start.go:305] selected driver: docker
	I1014 19:59:16.101045  461207 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:16.101172  461207 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:59:16.103591  461207 out.go:203] 
	W1014 19:59:16.105109  461207 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1014 19:59:16.106244  461207 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.689566766Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-744288" id=b60b3fef-7230-42d2-a9a6-cef01db1efe0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.689733197Z" level=info msg="Image localhost/kicbase/echo-server:functional-744288 not found" id=b60b3fef-7230-42d2-a9a6-cef01db1efe0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.689793437Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-744288 found" id=b60b3fef-7230-42d2-a9a6-cef01db1efe0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.519464082Z" level=info msg="Checking image status: kicbase/echo-server:functional-744288" id=d608cc27-57d8-4e14-a0ff-40bd3bd6564a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.549351017Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-744288" id=62981681-0fa8-406b-8b2d-cf2142d24eca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.5495116Z" level=info msg="Image docker.io/kicbase/echo-server:functional-744288 not found" id=62981681-0fa8-406b-8b2d-cf2142d24eca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.549557679Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-744288 found" id=62981681-0fa8-406b-8b2d-cf2142d24eca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.576878635Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-744288" id=942b9e70-5f8a-4cb9-8791-e7084d4f23a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.577061506Z" level=info msg="Image localhost/kicbase/echo-server:functional-744288 not found" id=942b9e70-5f8a-4cb9-8791-e7084d4f23a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.577109506Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-744288 found" id=942b9e70-5f8a-4cb9-8791-e7084d4f23a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.6111808Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d6fe3530-6964-40d8-b091-0d302e9eb830 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.612103799Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8fdfc975-bb51-49fb-a41c-fac48fe56e54 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.61314511Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-744288/kube-controller-manager" id=08c788f0-3b41-4261-af66-522ba0e9fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.613403746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.617136106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.617554992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.640303893Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=08c788f0-3b41-4261-af66-522ba0e9fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.641728703Z" level=info msg="createCtr: deleting container ID bba649260e28cfb6038b76315f36b7e24c9977b15e9f09342ec58e17ce8f376b from idIndex" id=08c788f0-3b41-4261-af66-522ba0e9fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.64178849Z" level=info msg="createCtr: removing container bba649260e28cfb6038b76315f36b7e24c9977b15e9f09342ec58e17ce8f376b" id=08c788f0-3b41-4261-af66-522ba0e9fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.64182847Z" level=info msg="createCtr: deleting container bba649260e28cfb6038b76315f36b7e24c9977b15e9f09342ec58e17ce8f376b from storage" id=08c788f0-3b41-4261-af66-522ba0e9fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:25 functional-744288 crio[5849]: time="2025-10-14T19:59:25.64426685Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-744288_kube-system_b1fd55382fcf5a735f17d7c6c4ddad91_0" id=08c788f0-3b41-4261-af66-522ba0e9fe03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:26 functional-744288 crio[5849]: time="2025-10-14T19:59:26.611156843Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=564db030-3590-445a-8a4e-74e96afb0844 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:26 functional-744288 crio[5849]: time="2025-10-14T19:59:26.612196301Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=dbca25b3-d030-451c-be76-18c03d06ee96 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:26 functional-744288 crio[5849]: time="2025-10-14T19:59:26.613316723Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-744288/kube-apiserver" id=3f1819ca-f87f-470a-9400-4b1b459a84fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:26 functional-744288 crio[5849]: time="2025-10-14T19:59:26.613608655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:26.728257   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:26.728853   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:26.730715   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:26.731326   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:26.733035   18119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:26 up  2:41,  0 user,  load average: 1.62, 0.46, 1.19
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.635224   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:23 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:23 functional-744288 kubelet[15039]:  > podSandboxID="ee064544bd4e42055c680fa907a0270df315e93c45e8bb6818d6d53626d20a55"
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.635333   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:23 functional-744288 kubelet[15039]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:23 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.635364   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:59:24 functional-744288 kubelet[15039]: E1014 19:59:24.624931   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	Oct 14 19:59:25 functional-744288 kubelet[15039]: E1014 19:59:25.378742   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01ddb1340  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,LastTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:59:25 functional-744288 kubelet[15039]: E1014 19:59:25.610638   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:25 functional-744288 kubelet[15039]: E1014 19:59:25.644615   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:25 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:25 functional-744288 kubelet[15039]:  > podSandboxID="834c7ad581f3fcc6f5d04a9ecdd22e99efde1b20033a85c33ba33f7567fe39fc"
	Oct 14 19:59:25 functional-744288 kubelet[15039]: E1014 19:59:25.644733   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:25 functional-744288 kubelet[15039]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-744288_kube-system(b1fd55382fcf5a735f17d7c6c4ddad91): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:25 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:25 functional-744288 kubelet[15039]: E1014 19:59:25.644811   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-744288" podUID="b1fd55382fcf5a735f17d7c6c4ddad91"
	Oct 14 19:59:26 functional-744288 kubelet[15039]: E1014 19:59:26.610630   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:26 functional-744288 kubelet[15039]: E1014 19:59:26.638993   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:26 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:26 functional-744288 kubelet[15039]:  > podSandboxID="3c54d3192ed1a94339d7aeaa1e4937313dec117490489404c0f549da6defb72e"
	Oct 14 19:59:26 functional-744288 kubelet[15039]: E1014 19:59:26.639144   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:26 functional-744288 kubelet[15039]:         container kube-apiserver start failed in pod kube-apiserver-functional-744288_kube-system(5ce31098ce493b77069c880f0c6ac8e6): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:26 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:26 functional-744288 kubelet[15039]: E1014 19:59:26.639191   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-744288" podUID="5ce31098ce493b77069c880f0c6ac8e6"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (327.21401ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (1.44s)
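Note on the failure mode: every control-plane container create in the kubelet log above fails with "cannot open sd-bus: No such file or directory", which typically means the runtime is trying to use the systemd cgroup manager while no systemd bus socket is reachable inside the node container. A minimal diagnostic sketch, assuming shell access to the functional-744288 node container (the socket and config paths below are conventional CRI-O/systemd defaults, not values confirmed by this report):

	# check whether a systemd bus socket exists inside the node container
	docker exec functional-744288 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# check which cgroup manager CRI-O is configured with
	docker exec functional-744288 grep -R cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/

If cgroup_manager is "systemd" and neither socket exists, pointing CRI-O at the cgroupfs manager, or restoring systemd inside the node, is the usual remediation.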

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-744288 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-744288 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (55.239073ms)

                                                
                                                
** stderr ** 
	E1014 19:59:24.050654  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.051208  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.052696  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.053037  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.054462  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-744288 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1014 19:59:24.050654  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.051208  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.052696  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.053037  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.054462  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1014 19:59:24.050654  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.051208  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.052696  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.053037  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.054462  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1014 19:59:24.050654  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.051208  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.052696  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.053037  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.054462  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1014 19:59:24.050654  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.051208  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.052696  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.053037  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.054462  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1014 19:59:24.050654  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.051208  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.052696  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.053037  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 19:59:24.054462  465206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
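All five label assertions reuse the same go-template query, so once the apiserver answers again the check can be rerun by hand; a sketch using the context from this run:

	kubectl --context functional-744288 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'

On a healthy node this prints the minikube.k8s.io/commit, minikube.k8s.io/version, minikube.k8s.io/updated_at, minikube.k8s.io/name, and minikube.k8s.io/primary labels the test expects.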
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-744288
helpers_test.go:243: (dbg) docker inspect functional-744288:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	        "Created": "2025-10-14T19:32:11.700856501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 432350,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T19:32:11.736879267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/hosts",
	        "LogPath": "/var/lib/docker/containers/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a/ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a-json.log",
	        "Name": "/functional-744288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-744288:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-744288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ca55a7aa3ac1a055c5c96669019f558795a313505e680734901ed4465642a23a",
	                "LowerDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7474e2653fad8f96d0b2edb2947dba52f432e8f6324d88224718eafd377e5e8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-744288",
	                "Source": "/var/lib/docker/volumes/functional-744288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-744288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-744288",
	                "name.minikube.sigs.k8s.io": "functional-744288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "697bcb3f8caffcc21274484944886a08e42751728789c447339536a0c178eab6",
	            "SandboxKey": "/var/run/docker/netns/697bcb3f8caf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-744288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:cd:04:2d:45:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a03a168f8787c5e4cc68b4fb2601645260a265eddce7ef101976f25cd32bdabb",
	                    "EndpointID": "c7c49faacc3b14ce4dd3f84ad70993167a6a8b1490739ae12d7ca1980d176ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-744288",
	                        "ca55a7aa3ac1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
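The inspect output shows apiserver port 8441 published at 127.0.0.1:32901 even though nothing is answering on it. Rather than scanning the full JSON, the mapping can be read directly with a Go template; a sketch, assuming the container is still running:

	# print the host port bound to the apiserver's 8441/tcp
	docker inspect functional-744288 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'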
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-744288 -n functional-744288: exit status 2 (316.401122ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-744288 ssh -- ls -la /mount-9p                                                                                                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh cat /mount-9p/test-1760471957853366962                                                                                                    │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdspecific-port2832358850/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh     │ functional-744288 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh -- ls -la /mount-9p                                                                                                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount1                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount2 --alsologtostderr -v=1                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount1 --alsologtostderr -v=1                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ mount   │ -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount3 --alsologtostderr -v=1                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount1                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount2                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh     │ functional-744288 ssh findmnt -T /mount3                                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ mount   │ -p functional-744288 --kill=true                                                                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image   │ functional-744288 image save kicbase/echo-server:functional-744288 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image   │ functional-744288 image rm kicbase/echo-server:functional-744288 --alsologtostderr                                                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:59:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:59:15.984416  461207 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.984586  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.984597  461207 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.984604  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.985010  461207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.985511  461207 out.go:368] Setting JSON to false
	I1014 19:59:15.986502  461207 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.986600  461207 start.go:141] virtualization: kvm guest
	I1014 19:59:15.988840  461207 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.990551  461207 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.990567  461207 notify.go:220] Checking for updates...
	I1014 19:59:15.993365  461207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.994948  461207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.997169  461207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.999150  461207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:16.000873  461207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:59:16.003345  461207 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:16.004102  461207 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:59:16.029353  461207 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:59:16.029472  461207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:59:16.097661  461207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:16.086601927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:59:16.097897  461207 docker.go:318] overlay module found
	I1014 19:59:16.099803  461207 out.go:179] * Using the docker driver based on the existing profile
	I1014 19:59:16.101025  461207 start.go:305] selected driver: docker
	I1014 19:59:16.101045  461207 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:16.101172  461207 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:59:16.103591  461207 out.go:203] 
	W1014 19:59:16.105109  461207 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1014 19:59:16.106244  461207 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.611259631Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=226bf2ec-6278-4359-8f0d-27302e66098f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.612310074Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a47c95f3-f133-4e8a-ac21-664912b59b22 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.613268301Z" level=info msg="Creating container: kube-system/etcd-functional-744288/etcd" id=a0e156ea-6bee-4751-87b3-4eb75325fbf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.613512103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.616878709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.617286822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.6304538Z" level=info msg="Checking image status: kicbase/echo-server:functional-744288" id=fcaff8e5-6f6d-4d4c-b73d-54a4e7f3737a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.6310367Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a0e156ea-6bee-4751-87b3-4eb75325fbf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.632549227Z" level=info msg="createCtr: deleting container ID cebd07e4dd9d7707605eefb02b297d43890fc9ff7cf319e1fd1e5e2fecc41910 from idIndex" id=a0e156ea-6bee-4751-87b3-4eb75325fbf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.632595248Z" level=info msg="createCtr: removing container cebd07e4dd9d7707605eefb02b297d43890fc9ff7cf319e1fd1e5e2fecc41910" id=a0e156ea-6bee-4751-87b3-4eb75325fbf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.63263699Z" level=info msg="createCtr: deleting container cebd07e4dd9d7707605eefb02b297d43890fc9ff7cf319e1fd1e5e2fecc41910 from storage" id=a0e156ea-6bee-4751-87b3-4eb75325fbf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.634900345Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-744288_kube-system_07f65d41bdafe0b0f1a2009eadad0a38_0" id=a0e156ea-6bee-4751-87b3-4eb75325fbf2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.660374612Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-744288" id=557f40b8-8aa4-45ea-be41-87afcdca6618 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.660534732Z" level=info msg="Image docker.io/kicbase/echo-server:functional-744288 not found" id=557f40b8-8aa4-45ea-be41-87afcdca6618 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.660583523Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-744288 found" id=557f40b8-8aa4-45ea-be41-87afcdca6618 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.689566766Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-744288" id=b60b3fef-7230-42d2-a9a6-cef01db1efe0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.689733197Z" level=info msg="Image localhost/kicbase/echo-server:functional-744288 not found" id=b60b3fef-7230-42d2-a9a6-cef01db1efe0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:23 functional-744288 crio[5849]: time="2025-10-14T19:59:23.689793437Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-744288 found" id=b60b3fef-7230-42d2-a9a6-cef01db1efe0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.519464082Z" level=info msg="Checking image status: kicbase/echo-server:functional-744288" id=d608cc27-57d8-4e14-a0ff-40bd3bd6564a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.549351017Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-744288" id=62981681-0fa8-406b-8b2d-cf2142d24eca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.5495116Z" level=info msg="Image docker.io/kicbase/echo-server:functional-744288 not found" id=62981681-0fa8-406b-8b2d-cf2142d24eca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.549557679Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-744288 found" id=62981681-0fa8-406b-8b2d-cf2142d24eca name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.576878635Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-744288" id=942b9e70-5f8a-4cb9-8791-e7084d4f23a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.577061506Z" level=info msg="Image localhost/kicbase/echo-server:functional-744288 not found" id=942b9e70-5f8a-4cb9-8791-e7084d4f23a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 19:59:24 functional-744288 crio[5849]: time="2025-10-14T19:59:24.577109506Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-744288 found" id=942b9e70-5f8a-4cb9-8791-e7084d4f23a6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 19:59:24.990710   17849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:24.991373   17849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:24.993099   17849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:24.993596   17849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1014 19:59:24.995211   17849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 19:59:25 up  2:41,  0 user,  load average: 1.49, 0.42, 1.18
	Linux functional-744288 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.236841   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.377190   15039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-744288.186e73b01ddb1340  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-744288,UID:functional-744288,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-744288 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-744288,},FirstTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,LastTimestamp:2025-10-14 19:55:04.600777536 +0000 UTC m=+0.555327311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-744288,}"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: I1014 19:59:15.393702   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:15 functional-744288 kubelet[15039]: E1014 19:59:15.394167   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:59:18 functional-744288 kubelet[15039]: E1014 19:59:18.610260   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:18 functional-744288 kubelet[15039]: E1014 19:59:18.640996   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:18 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:18 functional-744288 kubelet[15039]:  > podSandboxID="6db547a209d52d0398507b1da96eecbcd999edc615f9bed4939047b6f878db45"
	Oct 14 19:59:18 functional-744288 kubelet[15039]: E1014 19:59:18.641127   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:18 functional-744288 kubelet[15039]:         container kube-scheduler start failed in pod kube-scheduler-functional-744288_kube-system(e9679524bf37cc2b727411d0e5a93bfe): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:18 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:18 functional-744288 kubelet[15039]: E1014 19:59:18.641182   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-744288" podUID="e9679524bf37cc2b727411d0e5a93bfe"
	Oct 14 19:59:19 functional-744288 kubelet[15039]: E1014 19:59:19.315045   15039 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 19:59:22 functional-744288 kubelet[15039]: E1014 19:59:22.238324   15039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-744288?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 14 19:59:22 functional-744288 kubelet[15039]: I1014 19:59:22.396487   15039 kubelet_node_status.go:75] "Attempting to register node" node="functional-744288"
	Oct 14 19:59:22 functional-744288 kubelet[15039]: E1014 19:59:22.396931   15039 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-744288"
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.610711   15039 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-744288\" not found" node="functional-744288"
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.635224   15039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 19:59:23 functional-744288 kubelet[15039]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:23 functional-744288 kubelet[15039]:  > podSandboxID="ee064544bd4e42055c680fa907a0270df315e93c45e8bb6818d6d53626d20a55"
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.635333   15039 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 19:59:23 functional-744288 kubelet[15039]:         container etcd start failed in pod etcd-functional-744288_kube-system(07f65d41bdafe0b0f1a2009eadad0a38): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 19:59:23 functional-744288 kubelet[15039]:  > logger="UnhandledError"
	Oct 14 19:59:23 functional-744288 kubelet[15039]: E1014 19:59:23.635364   15039 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-744288" podUID="07f65d41bdafe0b0f1a2009eadad0a38"
	Oct 14 19:59:24 functional-744288 kubelet[15039]: E1014 19:59:24.624931   15039 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-744288\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-744288 -n functional-744288: exit status 2 (322.142649ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-744288" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (1.39s)
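The kubelet log above shows why the apiserver never returns: every static-pod container (kube-scheduler, etcd) fails with "container create failed: cannot open sd-bus: No such file or directory". That error usually means the runtime's systemd cgroup manager cannot reach the system D-Bus socket inside the kic node container. A minimal diagnostic sketch, assuming the profile name functional-744288 from this run:

	# hedged sketch: check D-Bus and the CRI-O cgroup manager inside the node container
	docker exec functional-744288 ls -l /run/dbus/system_bus_socket   # socket must exist
	docker exec functional-744288 systemctl is-active dbus            # expect "active"
	docker exec functional-744288 grep -r cgroup_manager /etc/crio/   # "systemd" requires sd-bus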

TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-744288 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-744288 create deployment hello-node --image kicbase/echo-server: exit status 1 (60.946941ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-744288 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)
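Every kubectl call in this group fails identically: the TCP connect to 192.168.49.2:8441 is refused, so the deployment error is a symptom, not the bug. A quick hedged check that separates "apiserver down" from "kubectl misconfigured", using the same context and endpoint as above:

	curl -k --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
	kubectl --context functional-744288 get --raw /readyz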

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 service list: exit status 103 (326.942531ms)

-- stdout --
	* The control-plane node functional-744288 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-744288"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-744288 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-744288 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-744288\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 service list -o json: exit status 103 (327.678925ms)

-- stdout --
	* The control-plane node functional-744288 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-744288"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-744288 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 service --namespace=default --https --url hello-node: exit status 103 (329.631284ms)

-- stdout --
	* The control-plane node functional-744288 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-744288"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-744288 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 service hello-node --url --format={{.IP}}: exit status 103 (357.787444ms)

-- stdout --
	* The control-plane node functional-744288 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-744288"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-744288 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-744288 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-744288\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1014 19:59:12.491972  459085 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:12.492800  459085 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:12.492844  459085 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:12.492861  459085 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:12.493216  459085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:12.498285  459085 mustload.go:65] Loading cluster: functional-744288
I1014 19:59:12.499452  459085 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:12.500414  459085 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:12.539542  459085 host.go:66] Checking if "functional-744288" exists ...
I1014 19:59:12.539979  459085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1014 19:59:12.634270  459085 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-14 19:59:12.620305266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1014 19:59:12.634424  459085 api_server.go:166] Checking apiserver status ...
I1014 19:59:12.634477  459085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1014 19:59:12.634527  459085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:12.658872  459085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
W1014 19:59:12.774127  459085 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1014 19:59:12.779506  459085 out.go:179] * The control-plane node functional-744288 apiserver is not running: (state=Stopped)
I1014 19:59:12.781056  459085 out.go:179]   To start a cluster, run: "minikube start -p functional-744288"

stdout: * The control-plane node functional-744288 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-744288"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 459084: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)
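The trace above shows how the tunnel decides the apiserver is stopped: it runs pgrep over SSH inside the node and exits 103 when nothing matches. The same probe can be replayed by hand (profile name as above):

	out/minikube-linux-amd64 -p functional-744288 ssh -- \
	  sudo pgrep -xnf "kube-apiserver.*minikube.*" || echo "no kube-apiserver process"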

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 service hello-node --url: exit status 103 (345.482212ms)

-- stdout --
	* The control-plane node functional-744288 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-744288"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-744288 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-744288 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-744288"
functional_test.go:1579: failed to parse "* The control-plane node functional-744288 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-744288\"": parse "* The control-plane node functional-744288 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-744288\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-744288 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-744288 apply -f testdata/testsvc.yaml: exit status 1 (78.717984ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-744288 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1014 19:59:12.877481  417373 retry.go:31] will retry after 1.779784619s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-744288 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-744288 get svc nginx-svc: exit status 1 (52.227469ms)

** stderr ** 
	E1014 20:01:09.031999  468854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 20:01:09.032381  468854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 20:01:09.033969  468854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 20:01:09.034296  468854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1014 20:01:09.035774  468854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-744288 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (116.16s)
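The malformed URL (Get "http:") is a knock-on effect: nginx-svc was never created, so there was no LoadBalancer ingress IP for the tunnel to expose and the test polled an empty host. On a healthy cluster the address being polled can be read directly; a hedged sketch with the same service name:

	kubectl --context functional-744288 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'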

TestFunctional/parallel/MountCmd/any-port (2.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdany-port534320135/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760471957853366962" to /tmp/TestFunctionalparallelMountCmdany-port534320135/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760471957853366962" to /tmp/TestFunctionalparallelMountCmdany-port534320135/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760471957853366962" to /tmp/TestFunctionalparallelMountCmdany-port534320135/001/test-1760471957853366962
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.537672ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 19:59:18.153248  417373 retry.go:31] will retry after 722.214477ms: exit status 1
I1014 19:59:18.249320  417373 retry.go:31] will retry after 4.903482352s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 19:59 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 19:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 19:59 test-1760471957853366962
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh cat /mount-9p/test-1760471957853366962
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-744288 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-744288 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (49.131382ms)

** stderr ** 
	E1014 19:59:19.745558  463199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-744288 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (273.251604ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=36899)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct 14 19:59 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct 14 19:59 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct 14 19:59 test-1760471957853366962
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-744288 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdany-port534320135/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdany-port534320135/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port534320135/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:36899
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port534320135/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdany-port534320135/001:/mount-9p --alsologtostderr -v=1] stderr:
I1014 19:59:17.903926  462426 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:17.904246  462426 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:17.904260  462426 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:17.904266  462426 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:17.904652  462426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:17.905065  462426 mustload.go:65] Loading cluster: functional-744288
I1014 19:59:17.905615  462426 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:17.906198  462426 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:17.926817  462426 host.go:66] Checking if "functional-744288" exists ...
I1014 19:59:17.927091  462426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1014 19:59:18.003291  462426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-14 19:59:17.991556254 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1014 19:59:18.003528  462426 cli_runner.go:164] Run: docker network inspect functional-744288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 19:59:18.030393  462426 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port534320135/001 into VM as /mount-9p ...
I1014 19:59:18.032206  462426 out.go:179]   - Mount type:   9p
I1014 19:59:18.034237  462426 out.go:179]   - User ID:      docker
I1014 19:59:18.035924  462426 out.go:179]   - Group ID:     docker
I1014 19:59:18.037365  462426 out.go:179]   - Version:      9p2000.L
I1014 19:59:18.038598  462426 out.go:179]   - Message Size: 262144
I1014 19:59:18.039881  462426 out.go:179]   - Options:      map[]
I1014 19:59:18.041141  462426 out.go:179]   - Bind Address: 192.168.49.1:36899
I1014 19:59:18.042527  462426 out.go:179] * Userspace file server: 
I1014 19:59:18.042637  462426 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1014 19:59:18.042713  462426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:18.061451  462426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
I1014 19:59:18.165303  462426 mount.go:180] unmount for /mount-9p ran successfully
I1014 19:59:18.165336  462426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1014 19:59:18.174666  462426 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36899,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1014 19:59:18.217596  462426 main.go:125] stdlog: ufs.go:141 connected
I1014 19:59:18.217852  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tversion tag 65535 msize 262144 version '9P2000.L'
I1014 19:59:18.217936  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rversion tag 65535 msize 262144 version '9P2000'
I1014 19:59:18.218182  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1014 19:59:18.218275  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rattach tag 0 aqid (20fa0ba e44e415c 'd')
I1014 19:59:18.218514  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 0
I1014 19:59:18.218699  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0ba e44e415c 'd') m d775 at 0 mt 1760471957 l 4096 t 0 d 0 ext )
I1014 19:59:18.220276  462426 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/.mount-process: {Name:mked2e812321f0540cb9a34294a7ae5de2b6bd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:59:18.220497  462426 mount.go:105] mount successful: ""
I1014 19:59:18.222419  462426 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port534320135/001 to /mount-9p
I1014 19:59:18.223778  462426 out.go:203] 
I1014 19:59:18.225021  462426 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1014 19:59:19.415209  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 0
I1014 19:59:19.415366  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0ba e44e415c 'd') m d775 at 0 mt 1760471957 l 4096 t 0 d 0 ext )
I1014 19:59:19.415770  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 1 
I1014 19:59:19.415834  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 
I1014 19:59:19.415967  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Topen tag 0 fid 1 mode 0
I1014 19:59:19.416034  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Ropen tag 0 qid (20fa0ba e44e415c 'd') iounit 0
I1014 19:59:19.416183  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 0
I1014 19:59:19.416299  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0ba e44e415c 'd') m d775 at 0 mt 1760471957 l 4096 t 0 d 0 ext )
I1014 19:59:19.416554  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 0 count 262120
I1014 19:59:19.416716  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 258
I1014 19:59:19.416887  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 258 count 261862
I1014 19:59:19.416938  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 0
I1014 19:59:19.417123  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 258 count 262120
I1014 19:59:19.417152  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 0
I1014 19:59:19.417268  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1014 19:59:19.417308  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bc e44e415c '') 
I1014 19:59:19.417386  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:19.417458  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bc e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.417576  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:19.417666  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bc e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.417770  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:19.417814  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:19.417923  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1014 19:59:19.417964  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bb e44e415c '') 
I1014 19:59:19.418038  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:19.418103  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0bb e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.418194  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:19.418288  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0bb e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.418361  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:19.418378  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:19.418443  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 2 0:'test-1760471957853366962' 
I1014 19:59:19.418477  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bd e44e415c '') 
I1014 19:59:19.418539  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:19.418593  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('test-1760471957853366962' 'jenkins' 'balintp' '' q (20fa0bd e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.418716  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:19.418843  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('test-1760471957853366962' 'jenkins' 'balintp' '' q (20fa0bd e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.418954  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:19.418977  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:19.419095  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 258 count 262120
I1014 19:59:19.419127  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 0
I1014 19:59:19.419246  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 1
I1014 19:59:19.419274  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:19.689060  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 1 0:'test-1760471957853366962' 
I1014 19:59:19.689146  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bd e44e415c '') 
I1014 19:59:19.689316  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 1
I1014 19:59:19.689437  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('test-1760471957853366962' 'jenkins' 'balintp' '' q (20fa0bd e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.689579  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 1 newfid 2 
I1014 19:59:19.689615  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 
I1014 19:59:19.689729  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Topen tag 0 fid 2 mode 0
I1014 19:59:19.689792  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Ropen tag 0 qid (20fa0bd e44e415c '') iounit 0
I1014 19:59:19.689900  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 1
I1014 19:59:19.689993  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('test-1760471957853366962' 'jenkins' 'balintp' '' q (20fa0bd e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:19.690252  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 2 offset 0 count 24
I1014 19:59:19.690307  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 24
I1014 19:59:19.690472  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:19.690506  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:19.690622  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 1
I1014 19:59:19.690649  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:20.011549  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 0
I1014 19:59:20.011717  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0ba e44e415c 'd') m d775 at 0 mt 1760471957 l 4096 t 0 d 0 ext )
I1014 19:59:20.012097  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 1 
I1014 19:59:20.012163  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 
I1014 19:59:20.012327  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Topen tag 0 fid 1 mode 0
I1014 19:59:20.012406  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Ropen tag 0 qid (20fa0ba e44e415c 'd') iounit 0
I1014 19:59:20.012550  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 0
I1014 19:59:20.012666  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0ba e44e415c 'd') m d775 at 0 mt 1760471957 l 4096 t 0 d 0 ext )
I1014 19:59:20.012944  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 0 count 262120
I1014 19:59:20.013110  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 258
I1014 19:59:20.013267  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 258 count 261862
I1014 19:59:20.013305  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 0
I1014 19:59:20.013518  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 258 count 262120
I1014 19:59:20.013557  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 0
I1014 19:59:20.013711  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1014 19:59:20.013773  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bc e44e415c '') 
I1014 19:59:20.013920  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:20.014009  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bc e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:20.014140  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:20.014232  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bc e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:20.014352  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:20.014378  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:20.014475  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1014 19:59:20.014526  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bb e44e415c '') 
I1014 19:59:20.014618  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:20.014704  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0bb e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:20.014873  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:20.014963  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0bb e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:20.015106  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:20.015138  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:20.015238  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 2 0:'test-1760471957853366962' 
I1014 19:59:20.015294  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rwalk tag 0 (20fa0bd e44e415c '') 
I1014 19:59:20.015390  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:20.015480  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('test-1760471957853366962' 'jenkins' 'balintp' '' q (20fa0bd e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:20.015643  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tstat tag 0 fid 2
I1014 19:59:20.015737  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rstat tag 0 st ('test-1760471957853366962' 'jenkins' 'balintp' '' q (20fa0bd e44e415c '') m 644 at 0 mt 1760471957 l 24 t 0 d 0 ext )
I1014 19:59:20.015872  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 2
I1014 19:59:20.015900  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:20.016056  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tread tag 0 fid 1 offset 258 count 262120
I1014 19:59:20.016096  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rread tag 0 count 0
I1014 19:59:20.016220  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 1
I1014 19:59:20.016246  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:20.017321  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1014 19:59:20.017375  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rerror tag 0 ename 'file not found' ecode 0
I1014 19:59:20.289657  462426 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:40098 Tclunk tag 0 fid 0
I1014 19:59:20.289720  462426 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:40098 Rclunk tag 0
I1014 19:59:20.290139  462426 main.go:125] stdlog: ufs.go:147 disconnected
I1014 19:59:20.308702  462426 out.go:179] * Unmounting /mount-9p ...
I1014 19:59:20.310035  462426 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1014 19:59:20.318114  462426 mount.go:180] unmount for /mount-9p ran successfully
I1014 19:59:20.318245  462426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/.mount-process: {Name:mked2e812321f0540cb9a34294a7ae5de2b6bd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1014 19:59:20.319896  462426 out.go:203] 
W1014 19:59:20.321252  462426 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1014 19:59:20.322414  462426 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.56s)
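Note that the 9p mount itself worked: the debug output shows it live at 192.168.49.1:36899 with all three test files present, and only the busybox pod step failed, again on the stopped apiserver. The mount checks the test performs can be reproduced by hand while a mount process is running; a hedged sketch (the /tmp path is a placeholder):

	out/minikube-linux-amd64 mount -p functional-744288 /tmp/mount-demo:/mount-9p &   # must stay alive
	out/minikube-linux-amd64 -p functional-744288 ssh -- "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-744288 ssh -- ls -la /mount-9p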

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-744288" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-744288" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-744288
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image load --daemon kicbase/echo-server:functional-744288 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-744288" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image save kicbase/echo-server:functional-744288 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)
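image save exited cleanly yet left no archive behind, which also explains the later ImageLoadFromFile failure ("no such file or directory") as a cascade rather than an independent bug. A hedged round-trip sketch that catches the missing file immediately (the /tmp path is a placeholder):

	out/minikube-linux-amd64 -p functional-744288 image save kicbase/echo-server:functional-744288 /tmp/echo-server.tar
	test -s /tmp/echo-server.tar || echo "image save produced no archive"
	out/minikube-linux-amd64 -p functional-744288 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-744288 image ls | grep echo-server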

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1014 19:59:24.866632  465692 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:24.867426  465692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:24.867445  465692 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:24.867452  465692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:24.867940  465692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:24.869234  465692 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:24.869332  465692 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:24.869742  465692 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
	I1014 19:59:24.889971  465692 ssh_runner.go:195] Run: systemctl --version
	I1014 19:59:24.890027  465692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
	I1014 19:59:24.910160  465692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
	I1014 19:59:25.015379  465692 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1014 19:59:25.015452  465692 cache_images.go:254] Failed to load cached images for "functional-744288": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1014 19:59:25.015484  465692 cache_images.go:266] failed pushing to: functional-744288

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
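
The stat error in the stderr above is the whole story: the tarball that ImageSaveToFile should have produced was never written, so this subtest fails downstream of the save rather than exposing a separate load bug. Guarding the load on the artifact would make the dependency explicit (a sketch, not the tests' current behavior):

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	[ -s "$tar" ] || { echo "SKIP: save step never wrote $tar"; exit 0; }
	out/minikube-linux-amd64 -p functional-744288 image load "$tar" --alsologtostderr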

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-744288
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image save --daemon kicbase/echo-server:functional-744288 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-744288
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-744288: exit status 1 (19.33403ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-744288

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-744288

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
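
`image save --daemon` is expected to export the image from the cluster back into the host docker daemon, where the test looks it up under the localhost/ prefix; since `docker rmi` removed the original tag first, finding nothing under either name means the export never happened. A manual check for both spellings (sketch):

	out/minikube-linux-amd64 -p functional-744288 image save --daemon kicbase/echo-server:functional-744288 --alsologtostderr
	docker image ls --format '{{.Repository}}:{{.Tag}}' | grep echo-server
	docker image inspect localhost/kicbase/echo-server:functional-744288 --format '{{.Id}}'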

TestMultiControlPlane/serial/StartCluster (505.46s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1014 20:04:12.807512  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:12.814045  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:12.825555  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:12.847045  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:12.888521  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:12.970093  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:13.131714  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:13.453514  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:14.095640  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:15.377325  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:17.939904  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:23.061592  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:33.303371  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:04:53.785603  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:05:34.748186  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:06:56.671543  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:09:12.798806  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:09:40.513570  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
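
The gaps between these retries roughly double (about 7 ms, 11 ms, 21 ms, 41 ms, ... out to minutes), consistent with client-go's cert-rotation backoff on a kubeconfig entry whose client certificate belongs to the already-deleted functional-744288 profile; they are leftover noise from the Functional suite, not part of this ha-579393 start. The stale reference is easy to confirm (a sketch, path taken from the log):

	grep -n "functional-744288" /home/jenkins/minikube-integration/21409-413763/kubeconfig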
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m24.103899445s)

-- stdout --
	* [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
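	(The two `crictl images --output json` runs above are how minikube decides the preload tarball can be skipped: the JSON image list is compared against the expected set for v1.34.1. The same data can be eyeballed by hand; a hedged sketch that assumes jq is available on the node:
	  # List every image tag the CRI runtime currently holds
	  sudo crictl images --output json | jq -r '.images[].repoTags[]'
	)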
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
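	(The empty `ExecStart=` followed by a populated one in the rendered unit above is the standard systemd drop-in idiom: the override in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf first clears the inherited ExecStart, then replaces it. To confirm what systemd will actually run after the daemon-reload later in this log:
	  # Show the base unit plus every drop-in, merged in order
	  systemctl cat kubelet
	  # Print the effective command line systemd resolved
	  systemctl show kubelet -p ExecStart
	)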
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
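	(A config like the one rendered above can be sanity-checked before the real `kubeadm init` touches the node; a hedged sketch using standard kubeadm subcommands, with the path taken from this log:
	  # Validate the multi-document config against kubeadm's schemas
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	  # Or render the fully defaulted objects without changing anything on the node
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)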
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
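	(The empty lsmod output above is why minikube gives up on IPVS-based control-plane load-balancing and falls back to a plain ARP-advertised VIP in the kube-vip config that follows. On a host whose kernel ships the modules they could be loaded roughly like this; it fails in this environment because /lib/modules for the 6.8.0-1041-gcp kernel does not contain them:
	  # Check what is already loaded
	  lsmod | grep ip_vs
	  # Try to load the IPVS family plus its scheduler modules
	  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	)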
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
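	(If the static pod above comes up, the elected kube-vip leader should advertise 192.168.49.254 on eth0, per the vip_interface and address entries. A quick hedged check from inside the node, useful when diagnosing the control-plane failure later in this log:
	  # The leader should carry the VIP as an extra address on eth0
	  ip addr show dev eth0 | grep 192.168.49.254
	  # And the apiserver should answer on the VIP once healthy
	  curl -k https://192.168.49.254:8443/livez
	)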
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
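	(The apiserver certificate assembled above must carry every address clients will use; the IP list logged at 20:03:29.018424 covers the service IP, localhost, the node IP, and the HA VIP. A quick way to confirm the SANs on the minted cert:
	  # Print the Subject Alternative Names baked into the profile cert
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	)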
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
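	(The 3ec20f2e.0, b5213941.0, and 51391683.0 link names above come from OpenSSL's subject-hash lookup scheme: each trust-store entry is named <subject hash>.<sequence>. The hashes match the `openssl x509 -hash -noout` runs in this log, and the links can be recreated by hand, e.g. for minikubeCA.pem:
	  # Compute the subject hash OpenSSL uses for lookup
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # Link the cert into the trust directory under <hash>.0
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	  # `openssl rehash /etc/ssl/certs` does the same for a whole directory
	)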
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
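	(The three checks that time out above are plain HTTPS endpoints, so they can be probed directly on the node while kubeadm is still waiting. In this log two of them return "connection refused", meaning the scheduler and controller-manager containers never started listening, which points at the container runtime or static-pod startup rather than networking:
	  # The same probes kubeadm performs, by hand (self-signed certs, hence -k)
	  curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez        # kube-scheduler
	)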
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
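	(Following kubeadm's own suggestion above, the next diagnostic step on this node would be to find the crashed control-plane container and read its logs, plus the kubelet journal to see whether the static pods were created at all; CONTAINERID is a placeholder for an ID from the ps output:
	  # List control-plane containers, including exited ones
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # Dump logs for a suspect container
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	  # Check whether the kubelet even attempted the static pods
	  sudo journalctl -u kubelet --no-pager | tail -n 50
	)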
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
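For local triage of this failure mode, the kubeadm hint captured in the log above can be followed inside the node. A minimal sketch, assuming the profile name ha-579393 from this run and that the kic container is still running:

	# open a shell in the minikube node for this profile
	out/minikube-linux-amd64 -p ha-579393 ssh
	# inside the node: list Kubernetes containers known to CRI-O (command taken verbatim from the kubeadm hint)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container, substituting CONTAINERID from the listing
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID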
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
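The full docker inspect dump above can be narrowed to the fields relevant to this failure with Go-template filters; a sketch, assuming the same container name:

	# container state and init PID
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' ha-579393
	# IP address on the per-profile network (the 192.168.49.2 probed by kubeadm)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-579393
	# host port mappings, including the 8443 apiserver port
	docker inspect -f '{{json .NetworkSettings.Ports}}' ha-579393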
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (303.918692ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:11:41.531999  475716 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
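The exit status 6 stems from the kubeconfig endpoint error in stderr: the ha-579393 entry never made it into the kubeconfig, so status cannot resolve the apiserver. As the stdout hint notes, minikube update-context rewrites the stale entry; a sketch against this profile:

	out/minikube-linux-amd64 -p ha-579393 update-context
	# confirm the cluster entry now resolves (assumes kubectl on PATH)
	kubectl config view -o jsonpath='{.clusters[?(@.name=="ha-579393")].cluster.server}'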
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-744288 --kill=true                                                                                                                                │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ image          │ functional-744288 image save kicbase/echo-server:functional-744288 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image rm kicbase/echo-server:functional-744288 --alsologtostderr                                                                              │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image save --daemon kicbase/echo-server:functional-744288 --alsologtostderr                                                                   │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/417373.pem                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/test/nested/copy/417373/hosts                                                                                               │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ ssh            │ functional-744288 ssh sudo cat /usr/share/ca-certificates/417373.pem                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/4173732.pem                                                                                                       │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /usr/share/ca-certificates/4173732.pem                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format short --alsologtostderr                                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format json --alsologtostderr                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format table --alsologtostderr                                                                                                     │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls --format yaml --alsologtostderr                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh pgrep buildkitd                                                                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr                                                          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                                 │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
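The lsmod probe above comes back empty, so kube-vip is configured without IPVS-based control-plane load-balancing and relies on ARP alone. On a host kernel that ships the modules, they could be enabled before start with something like the following (standard module names; per the log they are not available on this 6.8.0-1041-gcp kernel):

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs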
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
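The manifest above runs kube-vip as a host-network static pod: it advertises the virtual IP 192.168.49.254 on eth0 via ARP (vip_arp=true) and elects a single active instance through the plndr-cp-lock Lease in kube-system. Once the pod is up, a quick sanity check that the elected leader holds the VIP (a sketch; docker exec wrapper assumed for the Docker driver):

    docker exec ha-579393 ip addr show eth0 | grep 192.168.49.254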
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
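The two commands above are an idempotent /etc/hosts update: grep for the exact "IP<tab>hostname" line first, and only when it is missing, rewrite the file by filtering out any stale entry for that hostname and appending the fresh mapping through a temp file. The same pattern spelled out (a sketch; the variables are illustrative, not part of the tooling):

    ip=192.168.49.254
    name=control-plane.minikube.internal
    if ! grep -q "^${ip}"$'\t'"${name}\$" /etc/hosts; then
      { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    fi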
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
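Note the SAN list used for the apiserver profile cert above: 10.96.0.1 (the in-cluster kubernetes Service IP), 127.0.0.1, 10.0.0.1, the node IP 192.168.49.2, and the kube-vip VIP 192.168.49.254, so clients can reach the API over TLS through any of those addresses. To confirm after the fact (path from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt \
      | grep -A1 'Subject Alternative Name'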
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
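The openssl/ln sequence above wires each CA into OpenSSL's hashed lookup directory: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA in this run), and a <hash>.0 symlink under /etc/ssl/certs is what lets TLS clients resolve the CA by subject. Reproduced by hand for one cert (paths from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0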
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
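All three control-plane probes (apiserver :8443/livez, controller-manager :10257/healthz, scheduler :10259/livez) timed out, which usually means the static pods never came up under CRI-O. Following the hint kubeadm printed above, the next diagnostic step on the node would be (same command as printed; docker exec wrapper assumed for the Docker driver):

    docker exec ha-579393 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause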
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
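Before retrying, minikube runs `kubeadm reset --force` to tear down the failed attempt (static-pod manifests, generated kubeconfigs, etcd state), and the rm -f commands that follow sweep any leftover /etc/kubernetes/*.conf files. The equivalent manual cleanup (commands and paths as they appear in this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo rm -f /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf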
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
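The final gather step is a defensive fallback chain: use crictl if it resolves on PATH, otherwise fall back to plain docker ps -a, so the collector works on either runtime. The same idea in a compact form (a sketch, with slightly different semantics from the logged command):

    sudo "$(command -v crictl || echo docker)" ps -a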
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
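The crictl commands suggested in the kubeadm output above can be run against this CRI-O node directly; a minimal sketch, assuming the crio.sock endpoint printed above, with CONTAINERID as a placeholder for whichever container shows up as failed:

	# list all Kubernetes containers CRI-O knows about, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then pull the logs of the failing container (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

On a docker-driver node like this one, the commands would typically be wrapped in `minikube ssh -p ha-579393` or `docker exec ha-579393 ...`.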
	
	
	==> CRI-O <==
	Oct 14 20:11:38 ha-579393 crio[778]: time="2025-10-14T20:11:38.480209018Z" level=info msg="createCtr: removing container a78e502234fdb376f9e2aa0683b5461f136bcc61db4203aa081b5af4bdbbee19" id=9dcf1c8c-7722-4b80-ba50-d554e75ee511 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:38 ha-579393 crio[778]: time="2025-10-14T20:11:38.480241218Z" level=info msg="createCtr: deleting container a78e502234fdb376f9e2aa0683b5461f136bcc61db4203aa081b5af4bdbbee19 from storage" id=9dcf1c8c-7722-4b80-ba50-d554e75ee511 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:38 ha-579393 crio[778]: time="2025-10-14T20:11:38.482489889Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=9dcf1c8c-7722-4b80-ba50-d554e75ee511 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.453790483Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=0516570a-9044-4de4-a015-bbea25c3721a name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.454667561Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=49830148-2d60-4bcd-96e6-b055030550bc name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.455628495Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=8b73e11b-9e80-4835-a37c-a2ba727e6962 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.455883043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.459324165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.459721152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.47560663Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8b73e11b-9e80-4835-a37c-a2ba727e6962 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.476944089Z" level=info msg="createCtr: deleting container ID dcf503bc791b269f7ae60714d3c019c273d1e3ffb09b959b6b7dbd7b36493355 from idIndex" id=8b73e11b-9e80-4835-a37c-a2ba727e6962 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.476992042Z" level=info msg="createCtr: removing container dcf503bc791b269f7ae60714d3c019c273d1e3ffb09b959b6b7dbd7b36493355" id=8b73e11b-9e80-4835-a37c-a2ba727e6962 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.477023272Z" level=info msg="createCtr: deleting container dcf503bc791b269f7ae60714d3c019c273d1e3ffb09b959b6b7dbd7b36493355 from storage" id=8b73e11b-9e80-4835-a37c-a2ba727e6962 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:39 ha-579393 crio[778]: time="2025-10-14T20:11:39.478894009Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=8b73e11b-9e80-4835-a37c-a2ba727e6962 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.4534153Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b1453f91-0343-497b-b596-193723fdba5c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.45422154Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=cb1c0bd5-d45d-417f-860a-b8d84cb69053 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.455093378Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=a4054c5c-c9d7-475e-a2d5-f7e32f4028de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.455289194Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.458514522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.459115909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.475811398Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a4054c5c-c9d7-475e-a2d5-f7e32f4028de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.477327243Z" level=info msg="createCtr: deleting container ID c1204206abb720dcb719a6c233a98bcbb390623c2f9c9df455810c5183c2ad7c from idIndex" id=a4054c5c-c9d7-475e-a2d5-f7e32f4028de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.477367769Z" level=info msg="createCtr: removing container c1204206abb720dcb719a6c233a98bcbb390623c2f9c9df455810c5183c2ad7c" id=a4054c5c-c9d7-475e-a2d5-f7e32f4028de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.47740108Z" level=info msg="createCtr: deleting container c1204206abb720dcb719a6c233a98bcbb390623c2f9c9df455810c5183c2ad7c from storage" id=a4054c5c-c9d7-475e-a2d5-f7e32f4028de name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:11:40 ha-579393 crio[778]: time="2025-10-14T20:11:40.479506125Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=a4054c5c-c9d7-475e-a2d5-f7e32f4028de name=/runtime.v1.RuntimeService/CreateContainer
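Every CreateContainer attempt above dies with the same "cannot open sd-bus" error, which looks like the proximate reason the control-plane containers never start. A sketch of how one might confirm this from the host, assuming the error means CRI-O's systemd cgroup manager cannot reach a systemd bus inside the node container:

	# is systemd's private bus socket present inside the node container?
	docker exec ha-579393 ls -l /run/systemd/private
	# is systemd actually running as PID 1 there?
	docker exec ha-579393 ps -p 1 -o comm=

If the socket is missing, pointing cgroup_manager at "cgroupfs" instead of "systemd" in /etc/crio/crio.conf would be a hypothetical workaround; nothing in this report verifies it.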
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:42.129202    2730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:42.129711    2730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:42.131357    2730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:42.131849    2730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:42.133467    2730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:11:42 up  2:54,  0 user,  load average: 0.08, 0.06, 0.54
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:11:38 ha-579393 kubelet[1963]: E1014 20:11:38.452740    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:11:38 ha-579393 kubelet[1963]: E1014 20:11:38.482840    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:11:38 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:11:38 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:11:38 ha-579393 kubelet[1963]: E1014 20:11:38.482961    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:11:38 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:11:38 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:11:38 ha-579393 kubelet[1963]: E1014 20:11:38.482992    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:11:39 ha-579393 kubelet[1963]: E1014 20:11:39.453306    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:11:39 ha-579393 kubelet[1963]: E1014 20:11:39.479208    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:11:39 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:11:39 ha-579393 kubelet[1963]:  > podSandboxID="41ac2f349da00920582806a729366af02d901203fe089532947fdee2d8b61fa0"
	Oct 14 20:11:39 ha-579393 kubelet[1963]: E1014 20:11:39.479317    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:11:39 ha-579393 kubelet[1963]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:11:39 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:11:39 ha-579393 kubelet[1963]: E1014 20:11:39.479349    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:11:40 ha-579393 kubelet[1963]: E1014 20:11:40.453066    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:11:40 ha-579393 kubelet[1963]: E1014 20:11:40.468270    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:11:40 ha-579393 kubelet[1963]: E1014 20:11:40.479821    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:11:40 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:11:40 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:11:40 ha-579393 kubelet[1963]: E1014 20:11:40.479956    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:11:40 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:11:40 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:11:40 ha-579393 kubelet[1963]: E1014 20:11:40.479991    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (302.410019ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:11:42.525913  476061 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (505.46s)
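For the stale kubectl context warned about in the status output above, the profile-scoped form of the suggested fix would be the sketch below, though it is moot while the apiserver itself is down:

	minikube update-context -p ha-579393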

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (79.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (95.813049ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-579393" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- rollout status deployment/busybox: exit status 1 (96.105438ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.932521ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:11:42.827605  417373 retry.go:31] will retry after 969.618702ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.378177ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:11:43.895357  417373 retry.go:31] will retry after 2.038590566s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.574223ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:11:46.030308  417373 retry.go:31] will retry after 1.788759892s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.889862ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:11:47.914362  417373 retry.go:31] will retry after 4.33193113s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.131029ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:11:52.344139  417373 retry.go:31] will retry after 3.692103294s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.553466ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:11:56.133606  417373 retry.go:31] will retry after 6.900920396s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.387583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:12:03.130237  417373 retry.go:31] will retry after 9.386162659s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.829726ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:12:12.619832  417373 retry.go:31] will retry after 18.273167387s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.470106ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1014 20:12:30.998615  417373 retry.go:31] will retry after 29.253367212s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.49084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
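The retry cadence above (roughly 1s growing toward ~29s between attempts) is the test framework's backoff. The same poll-until-ready pattern can be reproduced by hand; a rough bash sketch that loops until pod IPs appear, with an illustrative doubling delay capped at 60s:

	delay=1
	until out=$(out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}') && [ -n "$out" ]; do
	  sleep "$delay"
	  delay=$(( delay * 2 )); [ "$delay" -gt 60 ] && delay=60
	done
	echo "pod IPs: $out"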
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (94.494403ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (95.49972ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (95.535193ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (93.333689ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-579393"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
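The inspect output above is where the host port mappings live; a single mapping can be pulled out with docker's Go-template formatter. A sketch for the apiserver's 8443/tcp binding (which resolves to 32906 in this run):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-579393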
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (307.662749ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:01.046594  476930 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
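Exit status 6 means the host is running but the kubeconfig check failed: status.go:458 cannot find the profile as a cluster entry in the kubeconfig the test points at. A rough sketch of that endpoint check using client-go (assuming a k8s.io/client-go dependency; this is illustrative, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Mirrors the failing check: the profile must exist as a cluster
		// entry before its server endpoint can be compared.
		cluster, ok := cfg.Clusters["ha-579393"]
		if !ok {
			fmt.Println(`"ha-579393" does not appear in kubeconfig`)
			os.Exit(6) // the "may be ok" status the harness tolerates
		}
		fmt.Println("endpoint:", cluster.Server)
	}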
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-744288 image ls --format yaml --alsologtostderr                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ ssh            │ functional-744288 ssh pgrep buildkitd                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
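The free-subnet probe logged at network.go:206 walks candidate private /24s, starting at 192.168.49.0/24, and takes the first one that doesn't collide with an existing Docker network. A simplified sketch of that scan, assuming the in-use CIDRs have already been collected; not minikube's actual code:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first /24 from base that overlaps none of
	// the in-use CIDRs, stepping 192.168.49.0 -> 192.168.50.0 -> ...
	func firstFreeSubnet(base string, inUse []string) (*net.IPNet, error) {
		ip := net.ParseIP(base).To4()
		for i := 0; i < 20; i++ {
			candidate := &net.IPNet{IP: ip, Mask: net.CIDRMask(24, 32)}
			free := true
			for _, cidr := range inUse {
				_, used, err := net.ParseCIDR(cidr)
				if err != nil {
					return nil, err
				}
				if used.Contains(ip) || candidate.Contains(used.IP) {
					free = false
					break
				}
			}
			if free {
				return candidate, nil
			}
			ip[2]++ // advance to the next /24
		}
		return nil, fmt.Errorf("no free /24 found from %s", base)
	}

	func main() {
		subnet, err := firstFreeSubnet("192.168.49.0", []string{"172.17.0.0/16"})
		fmt.Println(subnet, err) // 192.168.49.0/24 <nil>
	}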
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
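The "native" SSH client libmachine logs here is golang.org/x/crypto/ssh; the initial "connection reset by peer" is expected while sshd inside the freshly started container comes up, and the dial is simply retried. A bare-bones sketch of the same hostname probe (port and key path taken from the log above; retries and error handling trimmed):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, _ := os.ReadFile(".minikube/machines/ha-579393/id_rsa")
		signer, _ := ssh.ParsePrivateKey(key)
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32903", cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, "dial:", err) // the harness would retry
			os.Exit(1)
		}
		defer client.Close()
		session, _ := client.NewSession()
		defer session.Close()
		out, _ := session.Output("hostname")
		fmt.Printf("%s", out) // "ha-579393"
	}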
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
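Each sed invocation above rewrites one key in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. The pause-image edit redone as a small Go regexp, for illustration only; it assumes the drop-in file exists and is writable:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}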
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
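	The rendered config above is what later gets copied to /var/tmp/minikube/kubeadm.yaml. Recent kubeadm releases can sanity-check such a file offline before init; a minimal check, using the same binary path as this run, might be:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml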
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
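This is a degradation, not a failure: kube-vip skips IPVS-based control-plane load balancing but still advertises the VIP via ARP (vip_arp=true in the manifest below). Whether a kernel ships the module can be checked on the host directly; a sketch, assuming modprobe is available there:

    sudo modprobe ip_vs && lsmod | grep ip_vs    # fails on kernels built without ip_vs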
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
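Once this static pod is running on the elected leader, the 192.168.49.254 VIP should appear on eth0 inside the node. A hypothetical check with the docker driver, whose node container is named after the profile:

    docker exec ha-579393 ip addr show eth0 | grep 192.168.49.254

In a failed start like this one the address may never appear, since the control plane does not come up.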
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
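The hosts edit above is idempotent: grep -v strips any stale control-plane.minikube.internal entry before the VIP mapping is appended, so repeated starts do not accumulate duplicate lines. The result can be verified with:

    minikube -p ha-579393 ssh -- grep control-plane.minikube.internal /etc/hosts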
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
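The <hash>.0 symlinks created above follow OpenSSL's c_rehash convention, which is how verification code resolves a CA in /etc/ssl/certs by subject-name hash. For example, the b5213941 link matched earlier comes from:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                            # symlink back to minikubeCA.pem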
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
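The two local endpoints named above can be probed by hand while kubeadm waits, which separates "slow to become healthy" from "never started"; a sketch using this run's profile name:

    minikube -p ha-579393 ssh -- curl -ks https://127.0.0.1:10257/healthz    # kube-controller-manager
    minikube -p ha-579393 ssh -- curl -ks https://127.0.0.1:10259/livez      # kube-scheduler

Here both connections are refused outright, consistent with the later discovery that no kube-* containers were ever created: the static pods never started, so this is not a slow health check.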
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475028797Z" level=info msg="createCtr: removing container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475060215Z" level=info msg="createCtr: deleting container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1 from storage" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.477290398Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453139497Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=25acb935-6053-4dc4-8a00-c0f31525eed4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453177802Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=80fdaac0-913c-4b9e-9748-df734a5fb57f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453990016Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=81315333-2347-4ad7-a446-80adee999f7b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454064007Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2ce9826b-3e65-476d-b952-d7fea30fa2e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968172Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968174Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460303725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460785139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.461766752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.462288762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48128067Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48192273Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482937984Z" level=info msg="createCtr: deleting container ID 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from idIndex" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482979701Z" level=info msg="createCtr: removing container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483024499Z" level=info msg="createCtr: deleting container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from storage" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48342172Z" level=info msg="createCtr: deleting container ID 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from idIndex" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483468415Z" level=info msg="createCtr: removing container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48351067Z" level=info msg="createCtr: deleting container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from storage" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487098353Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:01.655723    3026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:01.656359    3026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:01.658099    3026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:01.658642    3026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:01.660252    3026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:01 up  2:55,  0 user,  load average: 0.08, 0.06, 0.50
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:12:55 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:55 ha-579393 kubelet[1963]: E1014 20:12:55.477777    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452628    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452751    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487429    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487530    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487567    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487619    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487709    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.488839    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:12:58 ha-579393 kubelet[1963]: E1014 20:12:58.640597    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.037350    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.473717    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.508888    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.091624    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: I1014 20:13:01.263563    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.263995    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (308.828641ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:02.053889  477281 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (79.53s)
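The kubeadm output above already names the next step; here is a minimal triage sketch for this failure, assuming shell access to the node via the profile from this run, and treating the repeated "cannot open sd-bus" errors in the CRI-O log as the lead:

	# open a shell on the control-plane node
	minikube ssh -p ha-579393
	# list the kube-* containers, exactly as kubeadm suggests
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs (CONTAINERID from the listing above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# "cannot open sd-bus" usually points at the cgroup manager: check whether
	# CRI-O is configured for systemd cgroups on a node where D-Bus is unreachable
	sudo grep -r cgroup_manager /etc/crio/

The grep path is an assumption: CRI-O commonly keeps cgroup_manager in /etc/crio/crio.conf or a drop-in under /etc/crio/crio.conf.d/, but the kicbase image may place it elsewhere.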

TestMultiControlPlane/serial/PingHostFromPods (1.41s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (93.526436ms)

** stderr ** 
	error: no server found for cluster "ha-579393"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
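The inspect output above is enough to probe the apiserver directly from the host: 8443/tcp is published on 127.0.0.1:32906 (a port specific to this run). A quick check, assuming curl is available on the host:

	# probe the livez endpoint kubeadm was polling, via the published port
	curl -k https://127.0.0.1:32906/livez

A connection-refused here would match the control-plane-check failures earlier in the log.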
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (303.618224ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:02.471573  477428 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
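The repeated warning that "ha-579393" does not appear in the kubeconfig is the stale-context problem minikube itself flags; a minimal repair sketch (profile name taken from this run; this fixes only the kubectl context, not the dead apiserver behind it):

	# rewrite the kubeconfig entry for this profile
	minikube update-context -p ha-579393
	# point kubectl at it and retry the query the test ran
	kubectl config use-context ha-579393
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'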
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-744288 ssh pgrep buildkitd                                                                           │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
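	The two lines above show the driver picking a free /24, creating a dedicated bridge network for the profile, and deriving the node's static IP from it. A minimal sketch for confirming the subnet and gateway of such a network, using only the stock docker CLI and the network name from this log:
	
	  # print the first IPAM config block (subnet and gateway) of the ha-579393 network
	  docker network inspect ha-579393 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'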
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
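	The extraction above is the populate-a-named-volume pattern: a throwaway container whose entrypoint is tar mounts the archive read-only and the volume read-write, unpacks, and exits. A sketch of the same pattern with an ordinary uncompressed tarball (demo-vol and files.tar are hypothetical names; this assumes alpine's busybox tar, which handles -xf and -C):
	
	  # unpack files.tar into the named volume demo-vol, then remove the container
	  docker run --rm --entrypoint /bin/tar -v "$PWD/files.tar":/files.tar:ro -v demo-vol:/extractDir alpine -xf /files.tar -C /extractDir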
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
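	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl) before the daemon-reload and restart. One way to spot-check the rewritten values, assuming the same drop-in path:
	
	  # show the keys minikube just edited in the CRI-O drop-in config
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf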
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
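	The version probes above go through the CRI socket configured in /etc/crictl.yaml a few lines earlier. The equivalent manual check, pointing crictl at the CRI-O socket explicitly rather than relying on that config file:
	
	  # query runtime name/version over the CRI API
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version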
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
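	The one-liner above is an idempotent /etc/hosts update: drop any line already mapping the name, append the fresh IP-to-name entry, and copy the temp file back over /etc/hosts. The same pattern generalized (NAME and IP are placeholders here; the grep pattern assumes a tab-separated hosts entry):
	
	  NAME=host.minikube.internal; IP=192.168.49.1
	  # keep every line except the old mapping, append the new one, install atomically
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$" && sudo cp "/tmp/hosts.$$" /etc/hosts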
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
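	The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new further down in this log. Recent kubeadm releases ship a validator for exactly this kind of file; a sketch, assuming the minikube-staged binary and the target path from this log:
	
	  # statically validate the generated kubeadm config before init runs
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new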
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
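	The manifest above runs kube-vip as a static pod (it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below) and, since the ip_vs modules were unavailable, falls back to ARP-advertising the virtual IP 192.168.49.254 on eth0. Once the pod is running, the VIP should appear as an extra address on that interface; a spot-check sketch from inside the node:
	
	  # the kube-vip VIP shows up as a secondary IPv4 address on eth0 of the leader
	  ip -4 addr show dev eth0 | grep 192.168.49.254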
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
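The apiserver cert just copied into place was signed for the IPs listed at crypto.go:68 above, covering the service IP (10.96.0.1), localhost, the node IP (192.168.49.2), and the HA VIP (192.168.49.254). A sketch for double-checking the SANs with openssl, using the profile path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt \
      | grep -A1 'Subject Alternative Name'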
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
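With the certs in place, the kubeconfig copied to /var/lib/minikube/kubeconfig should target the HA endpoint rather than a single node. A sketch of the check; the expected server value is an assumption, inferred from the admin.conf greps later in this log:

    sudo grep 'server:' /var/lib/minikube/kubeconfig
    # expected (assumed): server: https://control-plane.minikube.internal:8443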
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
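The symlink names in the three blocks above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the system trust store indexes CA certificates. The linking step can be sketched generically from the hash itself:

    # derive the subject hash and link the cert under it, as the commands above do
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"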
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
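The four grep/rm pairs above apply one rule per kubeconfig: if the file does not reference the HA endpoint, remove it so kubeadm regenerates it. An equivalent compact sketch:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done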
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
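As the preflight hint above notes, the image pull can be done ahead of time to keep it out of the init critical path. A sketch using the pinned binaries path from this run; the --cri-socket value is an assumption matching the CRI-O socket used elsewhere in this log:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
      --kubernetes-version v1.34.1 --cri-socket unix:///var/run/crio/crio.sock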
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
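Every one of the seven crictl queries above returned no container IDs, i.e. the control-plane containers were never created, which is consistent with the control-plane health checks above never succeeding. The enumeration can be sketched as one loop:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
    done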
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
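The repeated "connection refused" on localhost:8443 above confirms nothing is listening on the apiserver port at gather time. A sketch of the direct check; ss is assumed to be present in the node image:

    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'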
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475028797Z" level=info msg="createCtr: removing container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475060215Z" level=info msg="createCtr: deleting container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1 from storage" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.477290398Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453139497Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=25acb935-6053-4dc4-8a00-c0f31525eed4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453177802Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=80fdaac0-913c-4b9e-9748-df734a5fb57f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453990016Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=81315333-2347-4ad7-a446-80adee999f7b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454064007Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2ce9826b-3e65-476d-b952-d7fea30fa2e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968172Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968174Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460303725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460785139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.461766752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.462288762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48128067Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48192273Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482937984Z" level=info msg="createCtr: deleting container ID 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from idIndex" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482979701Z" level=info msg="createCtr: removing container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483024499Z" level=info msg="createCtr: deleting container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from storage" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48342172Z" level=info msg="createCtr: deleting container ID 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from idIndex" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483468415Z" level=info msg="createCtr: removing container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48351067Z" level=info msg="createCtr: deleting container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from storage" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487098353Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:03.069538    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:03.070063    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:03.071711    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:03.072246    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:03.073825    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:03 up  2:55,  0 user,  load average: 0.15, 0.08, 0.50
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:12:55 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:55 ha-579393 kubelet[1963]: E1014 20:12:55.477777    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452628    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452751    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487429    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487530    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487567    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487619    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487709    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.488839    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:12:58 ha-579393 kubelet[1963]: E1014 20:12:58.640597    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.037350    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.473717    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.508888    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.091624    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: I1014 20:13:01.263563    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.263995    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (302.398113ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:03.462122  477761 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.41s)
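
This failure, like every other control-plane failure in this report, traces back to one repeated CRI-O error: "cannot open sd-bus: No such file or directory", raised each time CRI-O's systemd cgroup manager tries to create a container without a reachable D-Bus socket inside the kicbase node container. Below is a minimal diagnostic sketch under that assumption; the profile name ha-579393 and the crictl invocation come from the logs above, while the /etc/crio config path and the socket paths are assumptions about the kicbase image:

	# Confirm which cgroup manager CRI-O is configured with inside the node
	# container (the /etc/crio path is an assumption about the kicbase image).
	docker exec ha-579393 grep -Rn "cgroup_manager" /etc/crio/

	# List the control-plane containers that failed to start, reusing the
	# crictl command from kubeadm's own troubleshooting advice above.
	docker exec ha-579393 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# If no systemd D-Bus socket exists at these paths, the systemd cgroup
	# manager cannot work, which would match the "cannot open sd-bus" error.
	docker exec ha-579393 ls -l /run/dbus/system_bus_socket /run/systemd/private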

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (1.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 node add --alsologtostderr -v 5: exit status 103 (254.333825ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-579393 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-579393"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:03.524418  477877 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:03.524580  477877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:03.524590  477877 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:03.524594  477877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:03.524811  477877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:03.525095  477877 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:03.525427  477877 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:03.525813  477877 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:03.543540  477877 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:03.543853  477877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:03.600816  477877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:03.590838035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:13:03.600946  477877 api_server.go:166] Checking apiserver status ...
	I1014 20:13:03.601013  477877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:13:03.601056  477877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:03.618368  477877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	W1014 20:13:03.723998  477877 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:03.725929  477877 out.go:179] * The control-plane node ha-579393 apiserver is not running: (state=Stopped)
	I1014 20:13:03.727364  477877 out.go:179]   To start a cluster, run: "minikube start -p ha-579393"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-579393 node add --alsologtostderr -v 5" : exit status 103
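Exit status 103 here is not a new failure: the --alsologtostderr trace above shows minikube probing for a live apiserver with 'sudo pgrep -xnf kube-apiserver.*minikube.*' over SSH before it will add a node, getting no match, and short-circuiting with state=Stopped. A sketch that replays that probe by hand; the port, key path, and docker username are copied from the sshutil line in the trace:

	# Replay minikube's apiserver probe against the ha-579393 node container.
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa \
	  -p 32903 docker@127.0.0.1 \
	  "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	# pgrep exiting non-zero (no matching process) is exactly what turns
	# "node add" into the exit-103 "apiserver is not running" path.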
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (303.005929ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:04.038529  477984 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
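The status.go:458 error repeated throughout these post-mortems is a secondary symptom: because kubeadm init never finished, minikube never wrote an ha-579393 entry into the kubeconfig, so every status probe degrades to exit code 6 and kubectl stays pointed at a stale context. A sketch of how one might verify that, and the repair the WARNING itself recommends (only useful once the cluster actually starts); the kubeconfig path is taken from the error message above:

	# Show which contexts the test kubeconfig actually contains; ha-579393
	# is expected to be missing, matching the status.go:458 error.
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21409-413763/kubeconfig

	# The fix the WARNING suggests, scoped to this profile.
	out/minikube-linux-amd64 -p ha-579393 update-context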
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
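	minikube shells out to docker system info --format "{{json .}}" twice in this stretch: once while picking a default driver and once while validating the chosen one. Individual fields can be pulled through the same Go-template interface without parsing the whole JSON blob; a minimal sketch against the daemon from this run:
	  docker system info --format '{{.CgroupDriver}}'    # systemd, per the dump above
	  docker system info --format '{{.ServerVersion}}'   # 28.5.1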
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
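	The per-profile bridge network is created with an explicit subnet and gateway so the node container can be pinned to the static IP calculated next (192.168.49.2). A quick way to confirm the result, assuming the network name from this run:
	  docker network inspect ha-579393 --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	  # expected: subnet=192.168.49.0/24 gateway=192.168.49.1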
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
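	The extraction above is the throwaway-sidecar pattern: the preload tarball is bind-mounted read-only, the profile's named volume is mounted at /extractDir, and tar unpacks one into the other so the node container later starts with /var pre-populated. Reduced to its generic shape (the variable names are illustrative; the mounts and image are the ones from this run):
	  PRELOAD=/home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	  KIC_IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD:/preloaded.tar:ro" -v ha-579393:/extractDir \
	    "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir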
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
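	Note the --publish=127.0.0.1::8443 form in the docker run above: each container port is bound to an ephemeral host port on loopback only, which is why minikube later dials addresses like 127.0.0.1:32903 for SSH. The live mappings can be recovered at any time, assuming the container name from this run:
	  docker port ha-579393 22/tcp     # -> 127.0.0.1:32903 in this run
	  docker port ha-579393 8443/tcp   # API server mapping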
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
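	The handshake failure here is expected rather than fatal: sshd inside the just-started container is not accepting connections yet, and libmachine keeps retrying until the hostname probe below succeeds a few seconds later. The same session can be opened by hand with the generated key, using the port, user, and key path from this run:
	  ssh -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa \
	      -p 32903 docker@127.0.0.1 hostname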
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
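	The CRIO_MINIKUBE_OPTIONS drop-in written just above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, which is what allows images to be pulled from in-cluster registries (such as the registry addon) over plain HTTP. The file can be read back to verify, assuming the container name from this run:
	  docker exec ha-579393 cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '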
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
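	Because kindnet was recommended as the CNI earlier, the stock bridge and podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so CRI-O cannot pick them up ahead of the kindnet config. The renames are visible with, assuming the container name from this run:
	  docker exec ha-579393 ls /etc/cni/net.d
	  # expect entries like 10-crio-bridge.conflist.disabled.mk_disabled and 87-podman-bridge.conflist.mk_disabled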
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
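	Taken together, the sed series above leaves /etc/crio/crio.conf.d/02-crio.conf with the pause image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager set to systemd (matching the cgroup driver detected on the host), conmon_cgroup set to pod, and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls. A one-liner to confirm the net effect after the restart, assuming the container name from this run:
	  docker exec ha-579393 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf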
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
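	The grep/echo/cp pattern above is deliberate: /etc/hosts inside a Docker container is bind-mounted, so rename-based in-place edits (sed -i and friends) fail with "Device or resource busy". Rewriting a temp file and cp-ing it back overwrites the contents while preserving the mounted inode; spelled out, the same idiom is:
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo '192.168.49.1	host.minikube.internal'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts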
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
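	The unit override above first empties ExecStart and then re-declares it, the systemd idiom for replacing (rather than appending to) a command line. Cluster-wide kubelet settings travel in the KubeletConfiguration file, while per-node values (--hostname-override, --node-ip) ride on these flags. The rendered drop-in lands at the path scp'd a little further down and can be read back with, assuming the container name from this run:
	  docker exec ha-579393 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf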
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
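	The generated file stacks four kubeadm documents: InitConfiguration (node registration and advertise address), ClusterConfiguration (the control-plane.minikube.internal:8443 endpoint and component extraArgs), KubeletConfiguration (systemd cgroups, disk eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack tuning left to the host). It is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below); on kubeadm releases that ship the subcommand, it can be sanity-checked offline with:
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new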
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
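	Since the ip_vs modules are unavailable (the lsmod probe above came back empty), kube-vip skips kernel load-balancing and falls back to ARP failover for the control-plane VIP 192.168.49.254: the pods elect a leader through the plndr-cp-lock lease in kube-system, and the leader answers ARP for the address. Once the cluster is up, the election is visible with:
	  kubectl -n kube-system get lease plndr-cp-lock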
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
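[Annotation: the three profile certs generated above (client, apiserver, aggregator/proxy-client) are what the later kubeadm run reuses. A quick manual check of the apiserver cert's SANs, using the path and expected IPs taken from the log above; this is an illustrative check, not something the test itself runs:]

    # print the SANs baked into the generated apiserver cert; per the log this
    # should list 10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 and the HA VIP 192.168.49.254
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt \
      | grep -A1 'Subject Alternative Name'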
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
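[Annotation: the scp lines above copy each cert/key pair into /var/lib/minikube/certs on the node. A sanity check that a copied pair actually matches, run from inside the node (e.g. `minikube ssh -p ha-579393`, profile name from the log) and assuming RSA keys, which the key sizes above suggest; again illustrative only:]

    # the two modulus digests must be identical if cert and key pair up
    sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
    sudo openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5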
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
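[Annotation: the repeated `openssl x509 -hash` / `ln -fs` pairs above are OpenSSL's hashed-symlink trust-store convention: `-hash` prints the subject hash, and a `<hash>.0` symlink under /etc/ssl/certs is what lets OpenSSL locate the CA at verification time. The same sequence for one CA, condensed from the commands in the log:]

    # install one CA into the trust store the way the log does it
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # b5213941 for minikubeCA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"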
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
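[Annotation: the four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init. The same sweep condensed into a loop (grep -q stands in for the log's grep-plus-exit-status check):]

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done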
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
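[Annotation: all three control-plane checks time out after 4m0s against the endpoints kubeadm printed. The same probes can be replayed by hand from inside the node; addresses and ports are taken verbatim from the log, and -k is needed because the health endpoints serve self-signed certs:]

    curl -sk https://192.168.49.2:8443/livez  ; echo    # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz ; echo    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez   ; echo    # kube-scheduler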
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
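[Annotation: kubeadm's own hint above is the natural next step: list every kube-* container CRI-O knows about and pull logs from whichever one exited. Spelled out, with CONTAINERID a placeholder exactly as in the message:]

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID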
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
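[Annotation: the retry fails identically: the kubelet reports healthy within about a second, yet none of the three static-pod components ever answers, which points at the containers crashing under CRI-O rather than at the kubelet itself. Two quick checks along those lines; standard crictl/journalctl usage, not commands the test runs at this point:]

    sudo crictl pods                                  # were the static-pod sandboxes even created?
    sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail' | tail -n 50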
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
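[Annotation: having found no containers for any control-plane component, minikube falls back to collecting the log sources above. To reproduce that bundle by hand, the exact commands from the log, in order:]

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a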
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475028797Z" level=info msg="createCtr: removing container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475060215Z" level=info msg="createCtr: deleting container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1 from storage" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.477290398Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453139497Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=25acb935-6053-4dc4-8a00-c0f31525eed4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453177802Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=80fdaac0-913c-4b9e-9748-df734a5fb57f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453990016Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=81315333-2347-4ad7-a446-80adee999f7b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454064007Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2ce9826b-3e65-476d-b952-d7fea30fa2e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968172Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968174Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460303725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460785139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.461766752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.462288762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48128067Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48192273Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482937984Z" level=info msg="createCtr: deleting container ID 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from idIndex" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482979701Z" level=info msg="createCtr: removing container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483024499Z" level=info msg="createCtr: deleting container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from storage" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48342172Z" level=info msg="createCtr: deleting container ID 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from idIndex" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483468415Z" level=info msg="createCtr: removing container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48351067Z" level=info msg="createCtr: deleting container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from storage" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487098353Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:04.639274    3353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:04.639875    3353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:04.641468    3353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:04.641908    3353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:04.643490    3353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:04 up  2:55,  0 user,  load average: 0.15, 0.08, 0.50
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:12:55 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:55 ha-579393 kubelet[1963]: E1014 20:12:55.477777    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452628    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452751    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487429    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487530    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487567    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487619    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487709    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.488839    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:12:58 ha-579393 kubelet[1963]: E1014 20:12:58.640597    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.037350    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.473717    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.508888    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.091624    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: I1014 20:13:01.263563    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.263995    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (305.528916ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:05.028377  478317 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-579393 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-579393 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (48.342687ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-579393

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-579393 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-579393 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (303.486421ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:05.400258  478449 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
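
The config.json save above goes through a WriteFile guarded by a named lock with a 500ms retry delay and a 1m timeout. A sketch of that pattern under the assumption that a plain lock file is good enough; writeFileLocked is a hypothetical stand-in for minikube's lock helper, and the temp-file-plus-rename write is a choice of the sketch.

package main

import (
	"encoding/json"
	"errors"
	"os"
	"time"
)

// writeFileLocked takes an exclusive lock file next to the target, retrying
// every delay until timeout, then writes the JSON atomically via a temp file
// and rename. Hypothetical helper, not minikube's actual implementation.
func writeFileLocked(path string, v interface{}, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)

	data, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	cfg := map[string]string{"Name": "ha-579393", "Driver": "docker"}
	if err := writeFileLocked("config.json", cfg, 500*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}
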
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
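
network.go settled on 192.168.49.0/24 as the first free private subnet and created the bridge network with gateway .1 and MTU 1500 (the docker network create above). A sketch of that scan which simply lets `docker network create` reject overlapping pools; the candidate octet list is an assumption, and minikube's real scan also checks host interfaces before asking docker.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createFreeNetwork walks candidate /24 blocks and asks docker to create a
// bridge network on each until one succeeds; docker itself refuses subnets
// that overlap an existing pool, which stands in for the free-subnet scan.
// The MTU and label mirror the flags visible in the log above.
func createFreeNetwork(name string) (string, error) {
	for _, octet := range []int{49, 58, 67, 76, 85} { // candidate blocks: an assumption
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			name)
		if err := cmd.Run(); err == nil {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free /24 found for %s", name)
}

func main() {
	subnet, err := createFreeNetwork("ha-579393")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created network on", subnet)
}
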
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
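
The preload tarball is unpacked into the named volume by a throwaway container running tar with the lz4 filter, exactly the `docker run --rm --entrypoint /usr/bin/tar` invocation above. A sketch that rebuilds the same command; the tarball path is shortened for readability and is illustrative only.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	tarball := "/home/jenkins/.minikube/cache/preloaded-images.tar.lz4" // illustrative path
	volume := "ha-579393"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703"

	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
		"-v", volume+":/extractDir",        // named volume receives the images
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("extracted preload in %s", time.Since(start))
}
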
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
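
Between the `docker run` and the "running status" line above, the log shows back-to-back inspects of .State.Running and .State.Status. A sketch of that readiness poll; the 250ms cadence and 30s budget are assumptions.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until .State.Running reports
// true or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(250 * time.Millisecond) // cadence is an assumption
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("ha-579393", 30*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("container is running")
}
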
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
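
kic.go mints an RSA key for the node and pushes the public half into /home/docker/.ssh/authorized_keys (381 bytes in this run). A sketch of the key-generation side, using golang.org/x/crypto/ssh (an external module) for the authorized_keys encoding; the 2048-bit size and the output paths are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// RSA key pair, matching the id_rsa/id_rsa.pub names in the log;
	// the key size is an assumption.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Private key, PEM-encoded PKCS#1, mode 0600.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}

	// Public key in authorized_keys format ("ssh-rsa AAAA...").
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}
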
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
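
Each provisioning snippet (hostname, the /etc/hosts fixup just above) runs over SSH against the forwarded loopback port, 32903 in this run. A round-trip sketch with golang.org/x/crypto/ssh (an external module); skipping host-key verification is tolerable in the sketch only because the endpoint is a local port forward, and the port and key path are taken from this particular run.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback forward only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32903", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
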
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
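
configureAuth signs a server certificate against the local CA with the SANs listed above ([127.0.0.1 192.168.49.2 ha-579393 localhost minikube]). A crypto/x509 sketch of that signing step; the in-memory CA, serial numbers, and expiry are simplifications of the sketch, while minikube instead loads ca.pem/ca-key.pem from disk.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA generated on the fly; a real provisioner loads it from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-579393"}},
		DNSNames:     []string{"ha-579393", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // CertExpiration:26280h0m0s is roughly 3 years
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("server.pem signed by minikubeCA")
}
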
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
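
The pause image and cgroup manager are pinned by in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (the sh -c runs above). A Go equivalent of the two substitutions, assuming the file fits in memory; the daemon-reload and restart themselves are left to systemctl.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror the two sed substitutions from the log: pin the pause image
	// and force the systemd cgroup manager.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// A real provisioner would now run `systemctl restart crio`.
}
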
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
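
The host.minikube.internal mapping is refreshed by filtering out any old line and appending a new one; the control-plane.minikube.internal fixup further down uses the same one-liner. A sketch of that upsert which writes through a temp file so a failed write cannot truncate /etc/hosts; the atomic rename is a choice of the sketch, not of the shell one-liner.

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost drops any line already ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
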
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
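
Whether image loading can be skipped is decided from `sudo crictl images --output json` (both runs above conclude "all images are preloaded"). A sketch that decodes that output and looks for one expected tag; the lowercase images/repoTags keys match crictl's JSON output, and the tag checked here is just an example.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// crictlImages models the subset of `crictl images -o json` output we read.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	want := "registry.k8s.io/pause:3.10.1" // one of the preloaded tags, as an example
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("missing:", want)
}
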
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
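
The kubeadm config above is rendered from the options struct logged at kubeadm.go:190, with the node IP, cluster name, and CIDRs filled in. A minimal text/template sketch of that rendering covering only a handful of the fields; the template is a cut-down illustration, not minikube's actual template.

package main

import (
	"log"
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

// opts carries just the values this cut-down template consumes.
type opts struct {
	NodeIP, ClusterName, KubernetesVersion string
	PodSubnet, ServiceCIDR                 string
	APIServerPort                          int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, opts{
		NodeIP:            "192.168.49.2",
		ClusterName:       "ha-579393",
		KubernetesVersion: "v1.34.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		APIServerPort:     8443,
	})
	if err != nil {
		log.Fatal(err)
	}
}
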
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
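
Before this manifest was written, the `lsmod | grep ip_vs` probe failed, so control-plane load-balancing was abandoned and kube-vip is configured in plain ARP mode. A sketch of that probe reading /proc/modules directly (which is all lsmod does) instead of shelling out.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// hasIPVS reports whether any ip_vs module is loaded, the same signal the
// `lsmod | grep ip_vs` probe in the log checks.
func hasIPVS() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasIPVS()
	if err != nil {
		log.Fatal(err)
	}
	if ok {
		fmt.Println("ipvs available: control-plane load-balancing can be enabled")
	} else {
		fmt.Println("ipvs missing: give up on load-balancing, keep ARP-mode kube-vip")
	}
}
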
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
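The /etc/ssl/certs/<hash>.0 names in the lines above come from OpenSSL's subject-hash scheme. A minimal sketch of recreating one of these trust links by hand, assuming the certificate paths shown in the log:

    # Compute the subject hash OpenSSL uses to index the system trust store.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Symlink the CA under <hash>.0 so TLS clients on the node can resolve it.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

For the minikube CA above this yields b5213941.0, matching the link created in the log.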
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
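Each grep/rm pair above implements the same stale-config check: a kubeconfig left behind by a previous kubeadm run is kept only if it already targets the expected control-plane endpoint. A sketch of the equivalent loop, assuming the four paths from the log:

    # Remove any leftover kubeconfig that does not point at the expected endpoint,
    # so the following 'kubeadm init' starts from a clean /etc/kubernetes.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}"; then
        sudo rm -f "/etc/kubernetes/${f}"
      fi
    done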
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
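The three failing checks can be probed directly to see which component actually answers. A minimal sketch, assuming the endpoints printed above, that curl is available in the node image, and a shell inside the node (for example via 'minikube ssh -p ha-579393'); -k is needed because the components serve self-signed certificates:

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez       # kube-scheduler

A 'connection refused' here, as in the errors above, means the component's container is not running at all rather than failing its health check.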
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
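The empty 'found id' results above come from filtering all containers by name, one component at a time. The same sweep can be reproduced in one loop, using only the crictl flags already shown in the log:

    # An empty result for a component means the runtime never created its container.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done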
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
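The diagnostics gathered above can be collected by hand with the same commands minikube runs, from a shell inside the node (for example 'minikube ssh -p ha-579393'):

    sudo journalctl -u kubelet -n 400    # kubelet logs
    sudo journalctl -u crio -n 400       # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                    # container status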
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
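Following the suggestion in the box, the full log bundle for this run would be produced against the test's profile, for example:

    minikube logs --file=logs.txt -p ha-579393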
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475028797Z" level=info msg="createCtr: removing container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.475060215Z" level=info msg="createCtr: deleting container 6a372a51d54dc4a99bd65b1bd218c7ad755d88bd6a7f29ba9f9ec88dfe7464b1 from storage" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:55 ha-579393 crio[778]: time="2025-10-14T20:12:55.477290398Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=9554bca4-6c59-4f80-aeee-73b1126c989d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453139497Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=25acb935-6053-4dc4-8a00-c0f31525eed4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453177802Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=80fdaac0-913c-4b9e-9748-df734a5fb57f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.453990016Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=81315333-2347-4ad7-a446-80adee999f7b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454064007Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2ce9826b-3e65-476d-b952-d7fea30fa2e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968172Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.454968174Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.455263857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460303725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460785139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.461766752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.462288762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48128067Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48192273Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482937984Z" level=info msg="createCtr: deleting container ID 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from idIndex" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482979701Z" level=info msg="createCtr: removing container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483024499Z" level=info msg="createCtr: deleting container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from storage" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48342172Z" level=info msg="createCtr: deleting container ID 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from idIndex" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483468415Z" level=info msg="createCtr: removing container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48351067Z" level=info msg="createCtr: deleting container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from storage" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487098353Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:06.009080    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:06.010029    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:06.011596    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:06.012099    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:06.013460    3511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:06 up  2:55,  0 user,  load average: 0.15, 0.08, 0.50
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:12:55 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:55 ha-579393 kubelet[1963]: E1014 20:12:55.477777    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452628    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.452751    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487429    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487530    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487567    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487619    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487709    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.488839    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:12:58 ha-579393 kubelet[1963]: E1014 20:12:58.640597    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.037350    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.473717    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.508888    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.091624    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: I1014 20:13:01.263563    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.263995    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	

-- /stdout --
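
The repeated CRI-O error above, "Container creation error: cannot open sd-bus: No such file or directory", is what keeps every control-plane container from starting: with the systemd cgroup manager, runc needs a systemd D-Bus socket inside the node. A minimal triage sketch, assuming the profile name ha-579393 from this run and the default CRI-O config location (both assumptions; adjust to your setup):

	# Confirm CRI-O is set to the systemd cgroup manager (assumes config under /etc/crio).
	minikube -p ha-579393 ssh -- sudo grep -r cgroup_manager /etc/crio/
	# Check whether the systemd bus sockets runc would talk to exist inside the node.
	minikube -p ha-579393 ssh -- ls -l /run/systemd/private /run/dbus/system_bus_socket
	# List the failing kube containers, as the kubeadm output above suggests.
	minikube -p ha-579393 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a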
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (308.88584ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:06.406057  478782 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.38s)
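
The status output above also warns that kubectl points at a stale context and that "ha-579393" is missing from the kubeconfig. A small repair sketch, following the warning's own suggestion (this fixes the context only; it does not revive the cluster):

	# Rewrite the kubeconfig entry for this profile, as the warning suggests.
	minikube -p ha-579393 update-context
	# Verify the context resolves; API calls still fail while the apiserver is down.
	kubectl config current-context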

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-579393" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-579393" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
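
Both assertions come from the same `profile list --output json` payload: ha_test.go counts Config.Nodes and reads the top-level Status. A sketch to reproduce the check by hand, assuming jq is available on the host (it is not part of the test harness):

	out/minikube-linux-amd64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-579393") | {Status, nodes: (.Config.Nodes | length)}'

For this run it would print Status "Starting" with 1 node, against the expected "HAppy" and 4 nodes.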
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
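
The inspect dump above shows the apiserver's 8443/tcp mapped to host port 32906. Instead of scanning the full JSON, a Go-template query pulls just that binding; a sketch assuming the container name from this run:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' ha-579393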
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (303.74212ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:07.059460  479056 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
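	The block above is the full generated cluster config, persisted to the profile's config.json at the path shown. A quick way to spot-check the saved profile (jq is an assumed convenience here, not something the test runs):

		jq '{name: .Name, k8s: .KubernetesConfig.KubernetesVersion, nodes: .Nodes}' \
		  /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json
		# expected: ha-579393, v1.34.1, and a single control-plane node whose IP is still empty at this point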
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
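	An illustrative check that the bridge network landed on the reserved subnet (not part of the test run):

		docker network inspect ha-579393 \
		  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
		# expected: 192.168.49.0/24 via 192.168.49.1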
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
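	The preload is unpacked into the named volume before the node container exists, via the throwaway tar container above. A hedged spot-check, assuming the cri-o image store lands under /var/lib/containers inside the volume:

		KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703'
		docker run --rm --entrypoint /bin/ls -v ha-579393:/var "$KIC" /var/lib/containers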
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
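	The node container publishes SSH (22), the apiserver port (8443), and the other listed ports on ephemeral host ports bound to 127.0.0.1. The mappings the provisioner resolves below can also be read directly (illustrative):

		docker port ha-579393 22/tcp     # e.g. 127.0.0.1:32903, the SSH endpoint used below
		docker port ha-579393 8443/tcp   # host-side apiserver endpoint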
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
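	The script above is idempotent: it touches /etc/hosts only when no entry for ha-579393 exists yet, rewriting an existing 127.0.1.1 line in place or appending one otherwise. After it runs, the guard can be confirmed with (illustrative):

		grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 ha-579393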
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
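	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before crio is restarted (reconstructed from the commands; the file's exact layout and section headers may differ):

		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]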
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
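	A client-side sanity check of the multi-document config above (yq v4 is an assumed tool here, not used by the test):

		yq 'select(.kind == "ClusterConfiguration") | .controlPlaneEndpoint' kubeadm.yaml
		# -> control-plane.minikube.internal:8443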
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
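	Because lsmod reports no ip_vs modules inside the kic container, kube-vip's IPVS-based control-plane load-balancing is skipped and the VIP appears to be handled purely via ARP (vip_arp is set below). On a host where the modules are available, the same probe would succeed (illustrative):

		sudo modprobe ip_vs && lsmod | grep ip_vs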
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
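	The manifest pins the VIP from APIServerHAVIP (192.168.49.254) to eth0 and leader-elects through the plndr-cp-lock lease. Once the static pod is running, the VIP should answer on the apiserver port; an illustrative probe from a node:

		curl -k https://192.168.49.254:8443/version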
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
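	The apiserver cert is first written under a hashed suffix (.6d4a1612, presumably keyed to the SAN set so a changed SAN list forces regeneration) and then copied to its canonical name. The SANs baked in above can be confirmed with (illustrative):

		openssl x509 -noout -ext subjectAltName \
		  -in /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
		# expect IP addresses 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.254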
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
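The sequence above is minikube installing each CA into the node's OpenSSL trust store: it hashes the certificate subject and symlinks the PEM to <hash>.0 under /etc/ssl/certs, which is the lookup convention OpenSSL uses for CA directories. A minimal sketch of the same steps for one of the certs (paths taken from the log; the hash value is whatever openssl prints, 51391683 in this run):

    # Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem)
    sudo ln -fs /etc/ssl/certs/417373.pem "/etc/ssl/certs/${hash}.0"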
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
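All three control-plane components failed the same health probes here. The checks kubeadm ran can be reproduced by hand on the node; the endpoints below are taken verbatim from the log (each component serves self-signed TLS, hence -k):

    curl -sk https://192.168.49.2:8443/livez       # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez         # kube-scheduler

Connection refused on the local ports plus a timeout on the apiserver address suggests the static-pod containers never came up at all, which is consistent with the crictl listings later in this log finding zero containers.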
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460303725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.460785139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.461766752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.462288762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48128067Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48192273Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482937984Z" level=info msg="createCtr: deleting container ID 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from idIndex" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.482979701Z" level=info msg="createCtr: removing container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483024499Z" level=info msg="createCtr: deleting container 13267deb92140ae09a619e9f9948fd9715254273819dbe61130eaa9df39a2123 from storage" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48342172Z" level=info msg="createCtr: deleting container ID 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from idIndex" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.483468415Z" level=info msg="createCtr: removing container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48351067Z" level=info msg="createCtr: deleting container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from storage" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487098353Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.453280312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=24d6610c-9e29-400c-8183-0b358ec1be79 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.454302065Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d3b7311a-885d-46cc-95e1-dd42b99ac5d9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.455237064Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.455462801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.459510932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.46003743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.475532461Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.477100933Z" level=info msg="createCtr: deleting container ID 667c339cd88f614ede8bc1aa048d0e845431c4dd40ce2b1070ed8f24bbd20700 from idIndex" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.477143128Z" level=info msg="createCtr: removing container 667c339cd88f614ede8bc1aa048d0e845431c4dd40ce2b1070ed8f24bbd20700" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.477176634Z" level=info msg="createCtr: deleting container 667c339cd88f614ede8bc1aa048d0e845431c4dd40ce2b1070ed8f24bbd20700 from storage" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.47952167Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:07.656944    3685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:07.657547    3685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:07.659112    3685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:07.659518    3685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:07.661182    3685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:07 up  2:55,  0 user,  load average: 0.22, 0.09, 0.51
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487567    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487619    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.487709    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:12:57 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:12:57 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:12:57 ha-579393 kubelet[1963]: E1014 20:12:57.488839    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:12:58 ha-579393 kubelet[1963]: E1014 20:12:58.640597    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.037350    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.473717    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.508888    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.091624    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: I1014 20:13:01.263563    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.263995    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.452640    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.479888    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:06 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:06 ha-579393 kubelet[1963]:  > podSandboxID="41ac2f349da00920582806a729366af02d901203fe089532947fdee2d8b61fa0"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.480004    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:06 ha-579393 kubelet[1963]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:06 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.480035    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	

                                                
                                                
-- /stdout --
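The repeated "Container creation error: cannot open sd-bus: No such file or directory" messages in the CRI-O and kubelet sections above are the proximate cause of every control-plane container failing to start; this error typically means the runtime was asked to use the systemd cgroup manager but cannot reach systemd's D-Bus socket inside the node. A minimal diagnostic sketch, assuming the ha-579393 node is still up and that CRI-O keeps its configuration under /etc/crio (assumptions, not confirmed by this report):

	# check whether the sockets sd-bus needs exist inside the node
	out/minikube-linux-amd64 -p ha-579393 ssh -- 'ls -l /run/systemd/private /run/dbus/system_bus_socket'
	# confirm which cgroup manager CRI-O was configured with (systemd vs cgroupfs)
	out/minikube-linux-amd64 -p ha-579393 ssh -- 'grep -rn cgroup_manager /etc/crio/'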
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (300.253621ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:08.045272  479382 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.64s)
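The status output above warns that kubectl is pointing at a stale context and suggests `minikube update-context`. A sketch of that remediation (the profile flag is assumed to apply; the outcome cannot be verified from this run, since the apiserver never became healthy):

	# rewrite this profile's kubeconfig entry to the current endpoint
	out/minikube-linux-amd64 update-context -p ha-579393
	# the active context should then be ha-579393
	kubectl config current-context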

                                                
                                    
TestMultiControlPlane/serial/CopyFile (1.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --output json --alsologtostderr -v 5: exit status 6 (300.089597ms)

                                                
                                                
-- stdout --
	{"Name":"ha-579393","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:08.107159  479496 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:08.107423  479496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:08.107438  479496 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:08.107443  479496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:08.107691  479496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:08.107892  479496 out.go:368] Setting JSON to true
	I1014 20:13:08.107928  479496 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:08.108085  479496 notify.go:220] Checking for updates...
	I1014 20:13:08.108373  479496 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:08.108396  479496 status.go:174] checking status of ha-579393 ...
	I1014 20:13:08.109025  479496 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:08.128304  479496 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:08.128354  479496 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:08.128629  479496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:08.146686  479496 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:08.147021  479496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:08.147088  479496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:08.166813  479496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:08.268454  479496 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:08.274741  479496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:08.287210  479496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:08.345661  479496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:08.334694365 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:08.346112  479496 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:08.346148  479496 api_server.go:166] Checking apiserver status ...
	I1014 20:13:08.346183  479496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:08.356662  479496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:08.356692  479496 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:08.356705  479496 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-579393 status --output json --alsologtostderr -v 5" : exit status 6
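The single JSON line on stdout above is machine-readable, so the fields the test asserts on can be extracted directly. A quick sketch (assumes jq is available on the CI host, which this report does not confirm; the status command itself still exits 6):

	out/minikube-linux-amd64 -p ha-579393 status --output json | jq -r '.APIServer, .Kubeconfig'
	# expected for this run: Stopped, then Misconfigured, one per line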
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
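The port bindings in the inspect output above are the same data minikube's status path reads through a Go template (see the cli_runner line in the earlier stderr trace). The lookup can be reproduced by hand, e.g. for the SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-579393
	# for this run: 32903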
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (301.504207ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:08.666668  479622 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
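Both status invocations fail the same way: status.go:458 cannot find an "ha-579393" endpoint in the kubeconfig. A hedged sanity check, with the path taken verbatim from the error above:

	# a zero count (and exit status 1) would confirm the profile entry is missing
	grep -c 'ha-579393' /home/jenkins/minikube-integration/21409-413763/kubeconfig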
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
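For reference, the subnet and gateway picked above can be read back from the created network with the same template style minikube itself uses (a sketch, assuming the host docker CLI):

    docker network inspect ha-579393 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected here: 192.168.49.0/24 192.168.49.1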
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
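The preloaded images now live inside the ha-579393 volume created above; a quick existence and label check (a sketch):

    docker volume inspect ha-579393 --format '{{.Name}} {{.Labels}}'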
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
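The port mapping resolved above (container port 22 forwarded to host port 32903) also permits a manual session with the freshly generated key; a sketch using only values from this log:

    ssh -p 32903 -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa docker@127.0.0.1 hostname
    # expected: ha-579393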
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
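Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch; other defaults in the file are untouched):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]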
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
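The same version probe can be reproduced by hand against the CRI socket configured in /etc/crictl.yaml above:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version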
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
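Once the matching drop-in is written (the 10-kubeadm.conf scp below), the effective unit, including this ExecStart override, can be reviewed on the node with:

    systemctl cat kubelet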
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
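After this file lands as /var/tmp/minikube/kubeadm.yaml (copied below), one way to sanity-check the rendered settings on the node is a preflight-only run; a sketch, not part of the test flow itself:

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml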
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
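With the ip_vs modules unavailable, kube-vip falls back to ARP-mode leader election for the VIP 192.168.49.254; once the cluster is up, the current holder of the lease named in this manifest can be checked with:

    kubectl -n kube-system get lease plndr-cp-lock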
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
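The SAN list requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, and the VIP 192.168.49.254) can be confirmed on the assembled certificate; a sketch:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt | grep -A1 'Subject Alternative Name'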
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
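Each ln -fs above implements OpenSSL's hashed-directory lookup: the link name is the subject hash printed by openssl x509 -hash plus a .0 suffix, so for the minikube CA (hash b5213941 per the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0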
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
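Note: the stale-config pass greps each kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:8443 and removes any file that fails the check. On this first start every grep exits with status 2 because the files do not exist yet, so the rm -f calls are no-ops. A condensed equivalent of the per-file commands above:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done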
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
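Note: all three probes (kube-apiserver at :8443/livez, kube-controller-manager at :10257/healthz, kube-scheduler at :10259/livez) timed out together after 4m0s, which points at the static-pod containers never being created rather than crashing one by one; the CRI-O log at the end of this report bears that out. kubeadm's suggested triage, spelled out as runnable commands on the node:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID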
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
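Note: before retrying, minikube tears the half-initialized control plane down with kubeadm reset so the second kubeadm init starts from known state; the second attempt's [certs] lines below show the certificates under /var/lib/minikube/certs survive the reset and are reused. The manual equivalent of the command above:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force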
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
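Note: for this run the requested attachment would be generated against the failing profile; a sketch, assuming the -p flag selects the profile for logs as it does for start:

    minikube -p ha-579393 logs --file=logs.txt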
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.48351067Z" level=info msg="createCtr: deleting container 1de87d6b72a75dded6420210657397b777ace5cdc49362907789517ac1f87bd8 from storage" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487098353Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=ba9a3535-5c46-4278-8b81-cf99a5fce5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:12:57 ha-579393 crio[778]: time="2025-10-14T20:12:57.487389405Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=0e15fee7-fb21-4c78-b7c1-f4cdd6b3ab37 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.453280312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=24d6610c-9e29-400c-8183-0b358ec1be79 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.454302065Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d3b7311a-885d-46cc-95e1-dd42b99ac5d9 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.455237064Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.455462801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.459510932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.46003743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.475532461Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.477100933Z" level=info msg="createCtr: deleting container ID 667c339cd88f614ede8bc1aa048d0e845431c4dd40ce2b1070ed8f24bbd20700 from idIndex" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.477143128Z" level=info msg="createCtr: removing container 667c339cd88f614ede8bc1aa048d0e845431c4dd40ce2b1070ed8f24bbd20700" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.477176634Z" level=info msg="createCtr: deleting container 667c339cd88f614ede8bc1aa048d0e845431c4dd40ce2b1070ed8f24bbd20700 from storage" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:06 ha-579393 crio[778]: time="2025-10-14T20:13:06.47952167Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=51c2ef18-2a57-46d7-ad18-f91666afadb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.453245789Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=0fbd4de1-e489-481b-bcb8-b2b9fb8155bf name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.454296391Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4feb33ed-6e64-4972-9d4d-f0eb2d673fcb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.4552917Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.455563034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.460609715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.4612762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.478393111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.479980828Z" level=info msg="createCtr: deleting container ID 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73 from idIndex" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.480033374Z" level=info msg="createCtr: removing container 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.480077995Z" level=info msg="createCtr: deleting container 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73 from storage" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.482906147Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:09.270974    3865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:09.271549    3865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:09.273162    3865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:09.273534    3865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:09.275118    3865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:09 up  2:55,  0 user,  load average: 0.22, 0.09, 0.51
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.473717    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:00 ha-579393 kubelet[1963]: E1014 20:13:00.508888    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.091624    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: I1014 20:13:01.263563    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:01 ha-579393 kubelet[1963]: E1014 20:13:01.263995    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.452640    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.479888    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:06 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:06 ha-579393 kubelet[1963]:  > podSandboxID="41ac2f349da00920582806a729366af02d901203fe089532947fdee2d8b61fa0"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.480004    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:06 ha-579393 kubelet[1963]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:06 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:06 ha-579393 kubelet[1963]: E1014 20:13:06.480035    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.093210    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: I1014 20:13:08.265061    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.265431    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.452709    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483283    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:08 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:08 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483427    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:08 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:08 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483475    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.641737    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	

                                                
                                                
-- /stdout --
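The CRI-O entries above show every control-plane container create failing with "cannot open sd-bus: No such file or directory", which is why kubeadm's wait-control-plane checks timed out against components that never started. That error typically means the runtime is using the systemd cgroup manager but cannot reach a systemd/D-Bus socket inside the node container. A minimal triage sketch, assuming the ha-579393 node container from this run is still up and that ps, grep and crictl are present in the kicbase image:

	# Is systemd PID 1 inside the node, and are its bus sockets present?
	docker exec ha-579393 ps -p 1 -o comm=
	docker exec ha-579393 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Which cgroup manager is CRI-O configured with? (cgroup_manager sits under [crio.runtime])
	docker exec ha-579393 sh -c 'grep -Rn cgroup_manager /etc/crio 2>/dev/null'
	# Mirror the crictl hint kubeadm printed in the wait-control-plane output:
	docker exec ha-579393 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a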
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (307.750088ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:09.670617  479955 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.63s)
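For reference, the three endpoints kubeadm polled during wait-control-plane can be probed by hand; a sketch assuming curl is available inside the node image (-k because the apiserver serves the cluster's self-signed CA):

	docker exec ha-579393 curl -ks https://192.168.49.2:8443/livez      # kube-apiserver
	docker exec ha-579393 curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
	docker exec ha-579393 curl -ks https://127.0.0.1:10259/livez       # kube-scheduler

With the container status table above empty, all three would be expected to refuse connections, matching the dial tcp ... connection refused errors in the kubeadm output.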

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 node stop m02 --alsologtostderr -v 5: exit status 85 (65.246999ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:09.730279  480071 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:09.731060  480071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:09.731082  480071 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:09.731090  480071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:09.731604  480071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:09.732352  480071 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:09.732732  480071 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:09.734644  480071 out.go:203] 
	W1014 20:13:09.736258  480071 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1014 20:13:09.736274  480071 out.go:285] * 
	* 
	W1014 20:13:09.745770  480071 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:13:09.747304  480071 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-579393 node stop m02 --alsologtostderr -v 5": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (305.03385ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:09.799683  480082 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:09.799982  480082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:09.799993  480082 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:09.799997  480082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:09.800207  480082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:09.800392  480082 out.go:368] Setting JSON to false
	I1014 20:13:09.800423  480082 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:09.800472  480082 notify.go:220] Checking for updates...
	I1014 20:13:09.800780  480082 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:09.800796  480082 status.go:174] checking status of ha-579393 ...
	I1014 20:13:09.801235  480082 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:09.819749  480082 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:09.819813  480082 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:09.820089  480082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:09.838457  480082 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:09.838798  480082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:09.838865  480082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:09.860144  480082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:09.961557  480082 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:09.968564  480082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:09.981884  480082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:10.041170  480082 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:10.030310778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:10.041778  480082 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:10.041815  480082 api_server.go:166] Checking apiserver status ...
	I1014 20:13:10.041859  480082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:10.052551  480082 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:10.052580  480082 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:10.052597  480082 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5" : exit status 6
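The exit status 6 traces back to status.go:458: the "ha-579393" profile has no endpoint entry in the integration kubeconfig, which is also why stdout reports kubeconfig as Misconfigured. A sketch of the checks the warning itself points at, reusing the kubeconfig path from the log:

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21409-413763/kubeconfig
	out/minikube-linux-amd64 -p ha-579393 update-context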
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
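The NetworkSettings block above maps the node's 8443/tcp to 127.0.0.1:32906 on the host, so the apiserver kubeadm gave up on can also be probed from outside the container; a sketch assuming curl on the host:

	curl -ks https://127.0.0.1:32906/livez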
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (320.313177ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:10.381300  480215 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
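The {{.Host}} selector is a Go template over minikube's status struct; the fields available to --format are visible in the status line logged earlier (Name, Host, Kubelet, APIServer, Kubeconfig, ...). A sketch combining several in one call:

	out/minikube-linux-amd64 status -p ha-579393 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}}'

Against this cluster it would be expected to print Running/Running/Stopped/Misconfigured and still exit 6, since the exit code reflects the degraded state rather than the template output.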
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node stop m02 --alsologtostderr -v 5                                                                  │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
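
The profile config being saved here is plain JSON at .minikube/profiles/ha-579393/config.json; the large struct dump above is its in-memory form. A sketch of reading a few fields back out, assuming the on-disk keys match the dump (the struct below is a hypothetical subset, not minikube's ClusterConfig):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profile is a hypothetical subset of minikube's cluster config; the key
// names come straight from the struct dump logged above.
type profile struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
}

func main() {
	data, err := os.ReadFile(os.ExpandEnv(
		"$HOME/.minikube/profiles/ha-579393/config.json"))
	if err != nil {
		panic(err)
	}
	var p profile
	if err := json.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s driver, k8s %s on %s\n",
		p.Name, p.Driver, p.KubernetesConfig.KubernetesVersion,
		p.KubernetesConfig.ContainerRuntime)
}
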
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
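
acquireMachinesLock serializes machine creation across concurrent minikube processes; the {Delay:500ms Timeout:10m0s} parameters above are the retry cadence and give-up deadline. Minikube's lock package uses a cross-process mutex rather than the bare lock file sketched here, but the retry loop looks roughly like this:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, mirroring the
// Delay:500ms / Timeout:10m0s parameters in the log above.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquireLock("/tmp/minikube-machines.lock",
		500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// cf. the "duration metric: took ..." line above
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}
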
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
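
network.go chose 192.168.49.0/24 as the first free private subnet, and kic.go then "calculated" the node address from the convention visible in the reservation above: the gateway is network+1 and the first client (the node) is network+2. Deriving both from a CIDR is a couple of lines with net/netip:

package main

import (
	"fmt"
	"net/netip"
)

// firstHosts returns the gateway (network+1) and first client address
// (network+2) for a subnet, matching the Gateway/ClientMin convention
// in the log above.
func firstHosts(cidr string) (gateway, client netip.Addr, err error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, netip.Addr{}, err
	}
	gateway = p.Addr().Next() // 192.168.49.1
	client = gateway.Next()   // 192.168.49.2
	return gateway, client, nil
}

func main() {
	gw, ip, err := firstHosts("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println("gateway:", gw, "node:", ip)
}
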
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
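
The 4.47s duration metric is simply a timed docker run: the preload tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar -I lz4 unpacks into it. A sketch of the same pattern (the tarball path below is shortened from the one in the log, and docker is assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Assumed, shortened path; the log uses the full Jenkins workspace path.
	tarball := "/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	// Mirrors the sidecar in the log: mount the preload read-only, mount
	// the machine volume at /extractDir, and untar with lz4 decompression.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "ha-579393:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Printf("took %s to extract preloaded images\n", time.Since(start))
}
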
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
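
Each KIC machine gets a dedicated RSA keypair; the public half is the 381-byte authorized_keys file copied into the container above. A sketch of the generation side, assuming golang.org/x/crypto/ssh for the authorized_keys encoding:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half, PEM-encoded like .minikube/machines/<name>/id_rsa.
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
		panic(err)
	}
	// Public half in authorized_keys format, as installed in the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
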
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
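
The shell snippet above is the usual idempotent /etc/hosts edit: rewrite the 127.0.1.1 line if one exists, otherwise append one. The same logic as a pure-Go helper (hypothetical; the real flow runs grep/sed/tee over SSH as logged):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostname rewrites an existing "127.0.1.1 ..." line or appends one,
// mirroring the grep/sed/tee script in the log above.
func setHostname(hosts []byte, name string) []byte {
	if strings.Contains(string(hosts), name) {
		return hosts // entry already present, nothing to do
	}
	entry := "127.0.1.1 " + name
	lines := strings.Split(string(hosts), "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // replace the existing loopback alias
			return []byte(strings.Join(lines, "\n"))
		}
	}
	if len(hosts) > 0 && hosts[len(hosts)-1] != '\n' {
		hosts = append(hosts, '\n')
	}
	return append(hosts, []byte(entry+"\n")...)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(setHostname(data, "ha-579393")))
}
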
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
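
configureAuth generated a server certificate whose subject alternative names are exactly the san=[...] list above: loopback, the node IP, the machine name, localhost, and minikube. A sketch of that SAN handling with crypto/x509; minikube signs with the ca.pem/ca-key.pem pair, whereas this example self-signs to stay short:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-579393"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// SANs matching the san=[...] list in the log.
		DNSNames:    []string{"ha-579393", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
		panic(err)
	}
}
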
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
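
The find/-exec mv above sidelines any bridge or podman CNI configs so the recommended kindnet CNI can own pod networking; renaming to *.mk_disabled instead of deleting keeps the step reversible. A pure-Go equivalent of that rename pass (hypothetical helper):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman conflists to *.mk_disabled,
// mirroring the find/-exec mv command in the log above.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already sidelined, or not a plain config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", disabled)
}
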
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
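
Each sed one-liner above is a whole-line rewrite keyed on an option name; the first, for example, pins pause_image in 02-crio.conf before crio is restarted. The same edit in Go with a multiline regexp (a sketch; the real flow runs sed over SSH as logged):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^ and $ match per line, like sed's default addressing.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
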
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
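
The 359-byte 10-kubeadm.conf scp'd a few lines below is rendered from the unit shown above: an emptied ExecStart followed by the real kubelet invocation with per-node flags. A text/template sketch of that rendering, with values taken from the log and the flag list abbreviated (hypothetical template, not minikube's):

package main

import (
	"os"
	"text/template"
)

// Values visible in the kubelet unit printed above.
type kubeletConfig struct {
	Version, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropIn))
	err := t.Execute(os.Stdout, kubeletConfig{
		Version: "v1.34.1", NodeName: "ha-579393", NodeIP: "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}
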
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
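
Three CIDRs have to coexist in this kubeadm config: the service subnet (10.96.0.0/12), the pod subnet (10.244.0.0/16), and the node network (192.168.49.0/24); kubeadm rejects overlapping ranges. A quick pairwise check with net/netip confirms these defaults are disjoint:

package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two aligned prefixes share any addresses:
// one must contain the other's network address if they intersect.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	cidrs := map[string]netip.Prefix{
		"service": netip.MustParsePrefix("10.96.0.0/12"),
		"pod":     netip.MustParsePrefix("10.244.0.0/16"),
		"node":    netip.MustParsePrefix("192.168.49.0/24"),
	}
	for an, a := range cidrs {
		for bn, b := range cidrs {
			if an < bn && overlaps(a, b) {
				fmt.Printf("%s and %s overlap\n", an, bn)
			}
		}
	}
	fmt.Println("check complete")
}
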
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
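
The vip_leaseduration/vip_renewdeadline/vip_retryperiod trio (5s/3s/1s) follows the usual Kubernetes leader-election constraints: the lease must outlive the renew deadline, which in turn must leave room for retries, and on leader loss a standby can claim the VIP after at most roughly one lease duration plus one retry period. A hand-rolled sanity check of those relationships (not kube-vip's code):

package main

import (
	"fmt"
	"time"
)

// validate applies the usual client-go-style leader-election sanity rules
// to the kube-vip values from the manifest above.
func validate(lease, renew, retry time.Duration) error {
	if lease <= renew {
		return fmt.Errorf("leaseDuration %v must exceed renewDeadline %v", lease, renew)
	}
	if renew <= retry {
		return fmt.Errorf("renewDeadline %v must exceed retryPeriod %v", renew, retry)
	}
	return nil
}

func main() {
	lease, renew, retry := 5*time.Second, 3*time.Second, 1*time.Second
	if err := validate(lease, renew, retry); err != nil {
		panic(err)
	}
	// Worst case: the old lease has to expire, then one retry tick fires.
	fmt.Println("max VIP failover ≈", lease+retry)
}
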
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.480033374Z" level=info msg="createCtr: removing container 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.480077995Z" level=info msg="createCtr: deleting container 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73 from storage" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.482906147Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.453340908Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fc3db041-30ba-4ea4-91c9-263af93e3f4c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.453422912Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f7739051-0a80-4b38-bcfa-7f3e19340dd0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.454245121Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4a6115c9-bcee-4f37-86ae-ac8a0cbe50ae name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.454281251Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d263eed7-1f54-448e-9f29-81542c68d11b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455657405Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-579393/kube-scheduler" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455797492Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455895265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455984585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.460313441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.46080845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.462620275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.463540928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.481277661Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.482411037Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.483425157Z" level=info msg="createCtr: deleting container ID 2559defe78404da161f15c0378570771ac93c99e54de5025a93bb1fb617f4c21 from idIndex" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.483466718Z" level=info msg="createCtr: removing container 2559defe78404da161f15c0378570771ac93c99e54de5025a93bb1fb617f4c21" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.483515818Z" level=info msg="createCtr: deleting container 2559defe78404da161f15c0378570771ac93c99e54de5025a93bb1fb617f4c21 from storage" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.484251179Z" level=info msg="createCtr: deleting container ID 3d1438db36cf6083982a14308f3b3714448928a48af832622e1066d81b9d28a9 from idIndex" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.484307314Z" level=info msg="createCtr: removing container 3d1438db36cf6083982a14308f3b3714448928a48af832622e1066d81b9d28a9" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.484346692Z" level=info msg="createCtr: deleting container 3d1438db36cf6083982a14308f3b3714448928a48af832622e1066d81b9d28a9 from storage" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.487513342Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.48912498Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:10.989593    4051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:10.990189    4051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:10.991917    4051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:10.992312    4051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:10.994318    4051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:11 up  2:55,  0 user,  load average: 0.22, 0.09, 0.51
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483283    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:08 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:08 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483427    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:08 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:08 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483475    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.641737    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.452890    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.453030    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.474073    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.487830    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > podSandboxID="d0a8c2929974ece2a9096ac441dce40bed26c1b0ec13fe00bf80ae77bedc2f7c"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.487965    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.488012    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.489414    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.489532    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.489571    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	

-- /stdout --
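The kubeadm failure above already names the right forensic commands; with the docker driver they can be run straight from the host, since the node is an ordinary container named ha-579393 (a sketch, assuming crictl is present in the node image, as it is in stock kicbase images):

    # list all kube containers, including ones that exited immediately
    docker exec ha-579393 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # fetch the logs of a failing container by the ID printed above
    docker exec ha-579393 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>

In this run the listing would come back empty, matching the empty "==> container status <==" table above: the control-plane containers fail inside CreateContainer and are deleted before they ever exist long enough to be listed.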
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (303.769291ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:11.380164  480559 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.71s)
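Every control-plane CreateContainer in the CRI-O and kubelet logs above dies with "cannot open sd-bus: No such file or directory", the error a runtime configured for the systemd cgroup manager reports when it cannot reach systemd's D-Bus socket inside the node. Two quick checks narrow this down (a sketch; the paths are the stock CRI-O and D-Bus defaults and may differ in the kicbase image):

    # is systemd's bus socket present inside the node container?
    docker exec ha-579393 ls -l /run/dbus/system_bus_socket
    # is CRI-O configured with cgroup_manager = "systemd" rather than "cgroupfs"?
    docker exec ha-579393 grep -r cgroup_manager /etc/crio/

If the socket is missing while the manager is set to "systemd", the CreateContainer failures above follow directly.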

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-579393" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
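The assertion reads the Status field of the profile out of that escaped JSON blob; when eyeballing a failure like this, the same field can be pulled out by hand (a sketch, assuming jq is available on the host):

    out/minikube-linux-amd64 profile list --output json \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'
    # here this would print "ha-579393	Starting" where the test wanted "Degraded"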
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
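For the handful of fields the post-mortem actually uses (container state and the node's address on the cluster network), docker inspect's standard -f Go template avoids dumping the whole document (a sketch):

    # prints: running 192.168.49.2
    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-579393").IPAddress}}' ha-579393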
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (302.282849ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:13:12.031590  480834 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
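The status.go:458 error says the "ha-579393" entry is simply absent from the kubeconfig the test points at, which is why the stdout above recommends minikube update-context. A direct way to confirm and repair (a sketch; the kubeconfig path is taken from the error message):

    # no ha-579393 context should appear here while the error persists
    kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21409-413763/kubeconfig
    # rewrite the context from the live profile
    out/minikube-linux-amd64 -p ha-579393 update-context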
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node stop m02 --alsologtostderr -v 5                                                                  │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
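
	The network create above pins the subnet, gateway, and MTU minikube computed (192.168.49.0/24, gateway 192.168.49.1, MTU 1500). A minimal sketch for verifying the result by hand, reusing the network name from this log:

	docker network inspect ha-579393 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
	# expected: subnet=192.168.49.0/24 gateway=192.168.49.1 mtu=1500
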
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
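
	The SSH session above runs through the host port Docker published for the container's port 22 (32903 in this run). A minimal sketch of reproducing that connection by hand, using the same inspect template and key path that appear in the log:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-579393)
	ssh -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa \
	  -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1 hostname
	# prints: ha-579393
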
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
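
	The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.49.2, ha-579393, localhost, minikube). A minimal sketch for confirming them once the file exists, assuming OpenSSL 1.1.1+ for the -ext flag:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem
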
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
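
	The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf before this restart. A minimal sketch for spot-checking the drop-in afterwards (the exact surrounding TOML sections are an assumption):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
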
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
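	The rendered kubeadm config above is what later lands in /var/tmp/minikube/kubeadm.yaml. A minimal sketch for sanity-checking it with kubeadm's own validator, assuming a kubeadm release recent enough to ship the "config validate" subcommand:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml
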
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
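
	Since the ip_vs modules were reported unavailable just above, this kube-vip manifest runs in ARP mode only: the elected leader holds the VIP 192.168.49.254 on eth0, with no IPVS load-balancing across control planes. A minimal sketch for inspecting the leader election once the cluster is up, using the lease name from the manifest:

	kubectl -n kube-system get lease plndr-cp-lock
	# HOLDER shows which control-plane node currently owns 192.168.49.254
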
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
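
	The symlink names used above are OpenSSL subject-hash values, which is why each certificate is first run through openssl x509 -hash. A minimal sketch cross-checking one of them against the link the log just created:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem
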
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
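
Editor's note: to triage this failure by hand, the crictl commands kubeadm suggests above must run inside the minikube node. A minimal sketch using minikube ssh (the profile name ha-579393 is taken from the certificate and CRI-O lines in this log; CONTAINERID is a placeholder for an ID from the first command's output):

    # List every Kubernetes container inside the node, including crashed ones:
    minikube ssh -p ha-579393 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # Then pull the logs of whichever control-plane container failed:
    minikube ssh -p ha-579393 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID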
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.480033374Z" level=info msg="createCtr: removing container 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.480077995Z" level=info msg="createCtr: deleting container 018575602c57ffad52b1c85d2a85ab388b8490a6071902e6c324528074a19f73 from storage" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:08 ha-579393 crio[778]: time="2025-10-14T20:13:08.482906147Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=c2685434-8e39-490a-9a29-67cc288e8fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.453340908Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=fc3db041-30ba-4ea4-91c9-263af93e3f4c name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.453422912Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f7739051-0a80-4b38-bcfa-7f3e19340dd0 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.454245121Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4a6115c9-bcee-4f37-86ae-ac8a0cbe50ae name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.454281251Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d263eed7-1f54-448e-9f29-81542c68d11b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455657405Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-579393/kube-scheduler" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455797492Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455895265Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.455984585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.460313441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.46080845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.462620275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.463540928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.481277661Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.482411037Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.483425157Z" level=info msg="createCtr: deleting container ID 2559defe78404da161f15c0378570771ac93c99e54de5025a93bb1fb617f4c21 from idIndex" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.483466718Z" level=info msg="createCtr: removing container 2559defe78404da161f15c0378570771ac93c99e54de5025a93bb1fb617f4c21" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.483515818Z" level=info msg="createCtr: deleting container 2559defe78404da161f15c0378570771ac93c99e54de5025a93bb1fb617f4c21 from storage" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.484251179Z" level=info msg="createCtr: deleting container ID 3d1438db36cf6083982a14308f3b3714448928a48af832622e1066d81b9d28a9 from idIndex" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.484307314Z" level=info msg="createCtr: removing container 3d1438db36cf6083982a14308f3b3714448928a48af832622e1066d81b9d28a9" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.484346692Z" level=info msg="createCtr: deleting container 3d1438db36cf6083982a14308f3b3714448928a48af832622e1066d81b9d28a9 from storage" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.487513342Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=e32b6156-17c6-4402-8950-e4d2f9d0f6d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:13:10 ha-579393 crio[778]: time="2025-10-14T20:13:10.48912498Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=3af1e8e1-bda9-4e3c-aac9-faa5acf750a7 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:13:12.628340    4226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:12.628854    4226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:12.630380    4226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:12.630841    4226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:13:12.632400    4226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:13:12 up  2:55,  0 user,  load average: 0.36, 0.13, 0.51
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483283    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:08 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:08 ha-579393 kubelet[1963]:  > podSandboxID="26eabc9a05c338cff1ebd4ea1b580692dcb1accc6b0e23f61f6a228d1f73adce"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483427    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:08 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:08 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.483475    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:13:08 ha-579393 kubelet[1963]: E1014 20:13:08.641737    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.452890    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.453030    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.474073    1963 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.487830    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > podSandboxID="d0a8c2929974ece2a9096ac441dce40bed26c1b0ec13fe00bf80ae77bedc2f7c"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.487965    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.488012    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.489414    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.489532    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:13:10 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:13:10 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:13:10 ha-579393 kubelet[1963]: E1014 20:13:10.489571    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (303.549239ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:13:13.014490  481164 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.63s)
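Every control-plane container in the CRI-O and kubelet logs above dies on the same error, `cannot open sd-bus: No such file or directory`. CRI-O raises this when it runs with the systemd cgroup manager (the docker info dumps below show CgroupDriver:systemd) but cannot reach a systemd/D-Bus socket inside the node container. A minimal diagnostic sketch, run from the host: the container name ha-579393 is taken from the logs, while the socket paths are the standard systemd/D-Bus locations and are assumptions, not something this report verified:

	# Does the node container expose the sockets CRI-O needs for sd-bus?
	docker exec ha-579393 ls -l /run/dbus/system_bus_socket /run/systemd/private
	# Which cgroup manager is CRI-O configured with? (systemd requires sd-bus)
	docker exec ha-579393 grep -r cgroup_manager /etc/crio/

If the sockets are missing, the usual suspects are a node container whose systemd never came up or a cgroupfs/systemd mismatch between Docker and CRI-O.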

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (54.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 node start m02 --alsologtostderr -v 5: exit status 85 (59.903078ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:13.076627  481279 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:13.076825  481279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:13.076837  481279 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:13.076841  481279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:13.077069  481279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:13.077336  481279 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:13.077672  481279 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:13.079650  481279 out.go:203] 
	W1014 20:13:13.080879  481279 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1014 20:13:13.080893  481279 out.go:285] * 
	* 
	W1014 20:13:13.084098  481279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:13:13.085511  481279 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:424: I1014 20:13:13.076627  481279 out.go:360] Setting OutFile to fd 1 ...
I1014 20:13:13.076825  481279 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:13:13.076837  481279 out.go:374] Setting ErrFile to fd 2...
I1014 20:13:13.076841  481279 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 20:13:13.077069  481279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 20:13:13.077336  481279 mustload.go:65] Loading cluster: ha-579393
I1014 20:13:13.077672  481279 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 20:13:13.079650  481279 out.go:203] 
W1014 20:13:13.080879  481279 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1014 20:13:13.080893  481279 out.go:285] * 
* 
W1014 20:13:13.084098  481279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1014 20:13:13.085511  481279 out.go:203] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-579393 node start m02 --alsologtostderr -v 5": exit status 85
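The exit status 85 (GUEST_NODE_RETRIEVE) is consistent with the StartCluster failure earlier in this report: the profile apparently never got past bootstrapping the primary node, so there is no m02 for `node start` to find. A quick sketch of how to confirm what the profile actually contains, using real minikube subcommands (the binary path and profile name are taken from the command above):

	out/minikube-linux-amd64 -p ha-579393 node list
	out/minikube-linux-amd64 profile list

The status polling that follows then fails with exit status 6 for the same underlying reason: the apiserver on the primary node never started.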
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (295.347139ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:13.135490  481291 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:13.135812  481291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:13.135825  481291 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:13.135831  481291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:13.136067  481291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:13.136253  481291 out.go:368] Setting JSON to false
	I1014 20:13:13.136292  481291 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:13.136385  481291 notify.go:220] Checking for updates...
	I1014 20:13:13.136671  481291 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:13.136690  481291 status.go:174] checking status of ha-579393 ...
	I1014 20:13:13.137197  481291 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:13.155722  481291 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:13.155774  481291 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:13.156069  481291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:13.173086  481291 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:13.173391  481291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:13.173466  481291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:13.192654  481291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:13.293191  481291 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:13.300030  481291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:13.313435  481291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:13.369246  481291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:13.359404077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:13.369704  481291 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:13.369735  481291 api_server.go:166] Checking apiserver status ...
	I1014 20:13:13.369807  481291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:13.380120  481291 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:13.380152  481291 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:13.380162  481291 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 20:13:13.386406  417373 retry.go:31] will retry after 1.238712607s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (301.156423ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:14.672060  481403 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:14.672451  481403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:14.672459  481403 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:14.672463  481403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:14.672639  481403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:14.672823  481403 out.go:368] Setting JSON to false
	I1014 20:13:14.672855  481403 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:14.672967  481403 notify.go:220] Checking for updates...
	I1014 20:13:14.673200  481403 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:14.673218  481403 status.go:174] checking status of ha-579393 ...
	I1014 20:13:14.673745  481403 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:14.692182  481403 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:14.692219  481403 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:14.692558  481403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:14.711230  481403 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:14.711596  481403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:14.711654  481403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:14.729400  481403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:14.830420  481403 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:14.837794  481403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:14.850438  481403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:14.909621  481403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:14.899082398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:14.910214  481403 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:14.910254  481403 api_server.go:166] Checking apiserver status ...
	I1014 20:13:14.910302  481403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:14.921287  481403 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:14.921317  481403 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:14.921328  481403 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 20:13:14.927241  417373 retry.go:31] will retry after 1.376006525s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (294.923321ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:16.347042  481516 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:16.347190  481516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:16.347198  481516 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:16.347212  481516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:16.347915  481516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:16.348184  481516 out.go:368] Setting JSON to false
	I1014 20:13:16.348220  481516 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:16.348329  481516 notify.go:220] Checking for updates...
	I1014 20:13:16.348652  481516 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:16.348669  481516 status.go:174] checking status of ha-579393 ...
	I1014 20:13:16.349141  481516 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:16.369369  481516 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:16.369438  481516 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:16.369838  481516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:16.387333  481516 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:16.387658  481516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:16.387708  481516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:16.405530  481516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:16.507137  481516 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:16.513620  481516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:16.526197  481516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:16.582045  481516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:16.570978066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:16.582448  481516 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:16.582472  481516 api_server.go:166] Checking apiserver status ...
	I1014 20:13:16.582503  481516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:16.593055  481516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:16.593076  481516 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:16.593086  481516 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 20:13:16.599040  417373 retry.go:31] will retry after 1.520946466s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (297.803541ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:18.165281  481643 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:18.165523  481643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:18.165531  481643 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:18.165535  481643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:18.165703  481643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:18.165930  481643 out.go:368] Setting JSON to false
	I1014 20:13:18.165979  481643 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:18.166050  481643 notify.go:220] Checking for updates...
	I1014 20:13:18.166334  481643 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:18.166349  481643 status.go:174] checking status of ha-579393 ...
	I1014 20:13:18.166749  481643 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:18.187707  481643 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:18.187736  481643 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:18.188026  481643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:18.204947  481643 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:18.205263  481643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:18.205313  481643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:18.223617  481643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:18.325685  481643 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:18.332308  481643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:18.344666  481643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:18.402138  481643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:18.390729709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:18.402605  481643 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:18.402636  481643 api_server.go:166] Checking apiserver status ...
	I1014 20:13:18.402683  481643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:18.413276  481643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:18.413295  481643 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:18.413327  481643 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 20:13:18.418797  417373 retry.go:31] will retry after 3.701806134s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (303.378225ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:22.166350  481783 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:22.166628  481783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:22.166639  481783 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:22.166644  481783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:22.166917  481783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:22.167101  481783 out.go:368] Setting JSON to false
	I1014 20:13:22.167130  481783 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:22.167227  481783 notify.go:220] Checking for updates...
	I1014 20:13:22.167471  481783 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:22.167490  481783 status.go:174] checking status of ha-579393 ...
	I1014 20:13:22.167990  481783 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:22.187530  481783 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:22.187581  481783 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:22.188062  481783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:22.207329  481783 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:22.207809  481783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:22.207868  481783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:22.227526  481783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:22.329390  481783 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:22.335832  481783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:22.348560  481783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:22.407030  481783 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:22.395336971 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:22.407516  481783 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:22.407543  481783 api_server.go:166] Checking apiserver status ...
	I1014 20:13:22.407576  481783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:22.417992  481783 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:22.418019  481783 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:22.418031  481783 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1014 20:13:22.424437  417373 retry.go:31] will retry after 2.979659035s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (305.471765ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:13:25.452437  481908 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:25.452708  481908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:25.452725  481908 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:25.452729  481908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:25.452993  481908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:25.453173  481908 out.go:368] Setting JSON to false
	I1014 20:13:25.453203  481908 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:25.453264  481908 notify.go:220] Checking for updates...
	I1014 20:13:25.453731  481908 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:25.453765  481908 status.go:174] checking status of ha-579393 ...
	I1014 20:13:25.454328  481908 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:25.473272  481908 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:25.473298  481908 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:25.473564  481908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:25.491076  481908 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:25.491531  481908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:25.491608  481908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:25.510634  481908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:25.616343  481908 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:25.623055  481908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:25.636059  481908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:25.694683  481908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:25.684432528 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:25.695149  481908 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:25.695176  481908 api_server.go:166] Checking apiserver status ...
	I1014 20:13:25.695210  481908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:25.706038  481908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:25.706069  481908 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:25.706083  481908 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 20:13:25.712195  417373 retry.go:31] will retry after 5.982928111s: exit status 6
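Every retry in this loop fails the same way: status.go:458 cannot find "ha-579393" in /home/jenkins/minikube-integration/21409-413763/kubeconfig, so the profile is reported as kubeconfig: Misconfigured and the command exits with status 6. A minimal recovery sketch based on the warning printed in the stdout block above (the get-contexts verification step is an assumption, not part of the test):

	out/minikube-linux-amd64 -p ha-579393 update-context
	kubectl config get-contexts   # "ha-579393" should now appear as a context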
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (305.80477ms)

-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1014 20:13:31.746441  482064 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:31.746732  482064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:31.746743  482064 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:31.746747  482064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:31.746975  482064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:31.747170  482064 out.go:368] Setting JSON to false
	I1014 20:13:31.747201  482064 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:31.747277  482064 notify.go:220] Checking for updates...
	I1014 20:13:31.747516  482064 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:31.747529  482064 status.go:174] checking status of ha-579393 ...
	I1014 20:13:31.748030  482064 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:31.768177  482064 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:31.768228  482064 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:31.768534  482064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:31.787548  482064 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:31.787903  482064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:31.787969  482064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:31.806945  482064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:31.909428  482064 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:31.916445  482064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:31.930085  482064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:31.988240  482064 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:31.977320791 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:31.988784  482064 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:31.988815  482064 api_server.go:166] Checking apiserver status ...
	I1014 20:13:31.988853  482064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:31.999936  482064 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:31.999961  482064 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:31.999972  482064 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 20:13:32.006367  417373 retry.go:31] will retry after 9.892331825s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (301.971306ms)

-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1014 20:13:41.952788  482230 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:13:41.952993  482230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:41.953002  482230 out.go:374] Setting ErrFile to fd 2...
	I1014 20:13:41.953006  482230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:13:41.953219  482230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:13:41.953402  482230 out.go:368] Setting JSON to false
	I1014 20:13:41.953429  482230 mustload.go:65] Loading cluster: ha-579393
	I1014 20:13:41.953456  482230 notify.go:220] Checking for updates...
	I1014 20:13:41.953881  482230 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:13:41.953904  482230 status.go:174] checking status of ha-579393 ...
	I1014 20:13:41.954419  482230 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:13:41.973581  482230 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:13:41.973609  482230 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:41.973951  482230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:13:41.992229  482230 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:13:41.992685  482230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:13:41.992785  482230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:13:42.011347  482230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:13:42.113616  482230 ssh_runner.go:195] Run: systemctl --version
	I1014 20:13:42.120593  482230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:13:42.133588  482230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:13:42.191747  482230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:13:42.18133005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:13:42.192233  482230 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:13:42.192264  482230 api_server.go:166] Checking apiserver status ...
	I1014 20:13:42.192309  482230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:13:42.203159  482230 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:13:42.203186  482230 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:13:42.203197  482230 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1014 20:13:42.209562  417373 retry.go:31] will retry after 24.106500149s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 6 (303.884231ms)

-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1014 20:14:06.363501  482455 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:14:06.363642  482455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:06.363654  482455 out.go:374] Setting ErrFile to fd 2...
	I1014 20:14:06.363661  482455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:06.363917  482455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:14:06.364099  482455 out.go:368] Setting JSON to false
	I1014 20:14:06.364130  482455 mustload.go:65] Loading cluster: ha-579393
	I1014 20:14:06.364183  482455 notify.go:220] Checking for updates...
	I1014 20:14:06.364480  482455 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:06.364495  482455 status.go:174] checking status of ha-579393 ...
	I1014 20:14:06.364927  482455 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:06.383609  482455 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:14:06.383634  482455 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:06.383936  482455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:06.401787  482455 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:06.402121  482455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:14:06.402171  482455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:06.420577  482455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:06.521422  482455 ssh_runner.go:195] Run: systemctl --version
	I1014 20:14:06.528324  482455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:14:06.540829  482455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:06.604828  482455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:14:06.594059956 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1014 20:14:06.605294  482455 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:06.605329  482455 api_server.go:166] Checking apiserver status ...
	I1014 20:14:06.605365  482455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:14:06.615932  482455 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:14:06.615958  482455 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:14:06.615971  482455 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5" : exit status 6
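The "apiserver: Stopped" verdict comes from the check logged at api_server.go:166-170 above: status runs pgrep over SSH and treats a non-zero exit as a stopped apiserver. A reproduction sketch, assuming SSH into the profile still works (the ssh_runner calls above all succeed):

	out/minikube-linux-amd64 -p ha-579393 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	echo $?   # non-zero matches the "apiserver: Stopped" status above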
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
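The inspect output above is also where the harness resolves the host-side SSH endpoint (127.0.0.1:32903 for this container); the same Go template that appears in the cli_runner lines earlier can be run by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-579393
	# prints 32903 for this run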
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (307.345827ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:14:06.932824  482598 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
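minikube's status help documents the exit code as a bitmask (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK); if that encoding applies here, 6 = 2 + 4: the host bit is clear, matching the lone "Running" line, while the misconfigured kubeconfig and stopped apiserver account for the other two bits. A hand check would look like:

	out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393; echo $?   # 6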
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node stop m02 --alsologtostderr -v 5                                                                  │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node start m02 --alsologtostderr -v 5                                                                 │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
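
	The network_create step above boils down to a single docker CLI invocation. A minimal Go sketch of that call, shelling out with os/exec the way minikube's cli_runner does (name, subnet, gateway, and MTU values are taken from the log; this is an illustrative sketch, not minikube's actual code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // createNetwork mirrors the logged `docker network create`: a bridge
	    // network with a fixed subnet/gateway, MTU option, and minikube labels.
	    func createNetwork(name, subnet, gateway string, mtu int) error {
	        out, err := exec.Command("docker", "network", "create",
	            "--driver=bridge",
	            "--subnet="+subnet,
	            "--gateway="+gateway,
	            "-o", "--ip-masq",
	            "-o", "--icc",
	            "-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
	            "--label=created_by.minikube.sigs.k8s.io=true",
	            "--label=name.minikube.sigs.k8s.io="+name,
	            name).CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("docker network create: %v: %s", err, out)
	        }
	        return nil
	    }

	    func main() {
	        if err := createNetwork("ha-579393", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
	            fmt.Println(err)
	        }
	    }
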
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
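
	The extraction above avoids needing lz4 on the host: the preload tarball is bind-mounted read-only into a throwaway kicbase container whose /usr/bin/tar unpacks it straight into the named volume. A sketch of the same pattern, assuming only that the docker CLI is on PATH (paths and image ref copied from the log):

	    package provision

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // extractPreload runs a disposable container that unpacks the lz4
	    // tarball into the named volume, pre-seeding the node's /var with
	    // container images before the node container itself starts.
	    func extractPreload(tarball, volume, image string) error {
	        out, err := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", volume+":/extractDir",
	            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("preload extract: %v: %s", err, out)
	        }
	        return nil
	    }
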
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
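
	The first dial above fails with a connection reset because sshd inside the freshly started container is not yet accepting connections; the provisioner simply retries until the handshake succeeds. A minimal retry loop with golang.org/x/crypto/ssh (address, user, and key path come from the log; the host-key check is disabled here only because the target is a throwaway test container):

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return nil, err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return nil, err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test container only
	        }
	        deadline := time.Now().Add(timeout)
	        for {
	            client, err := ssh.Dial("tcp", addr, cfg)
	            if err == nil {
	                return client, nil
	            }
	            if time.Now().After(deadline) {
	                return nil, fmt.Errorf("ssh dial %s: %v", addr, err)
	            }
	            time.Sleep(500 * time.Millisecond) // sshd may still be starting
	        }
	    }

	    func main() {
	        client, err := dialWithRetry("127.0.0.1:32903", "docker",
	            "/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa",
	            30*time.Second)
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        defer sess.Close()
	        out, _ := sess.Output("hostname")
	        fmt.Printf("%s", out)
	    }
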
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
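
	The shell run above keeps /etc/hosts idempotent: do nothing if the hostname is already mapped, otherwise rewrite an existing 127.0.1.1 line or append one. Roughly the same logic in Go, as a sketch (the path is parameterized so it can be tried unprivileged, and the "last field equals hostname" test approximates the shell's whitespace-then-hostname regex):

	    package provision

	    import (
	        "os"
	        "strings"
	    )

	    func ensureHostsEntry(path, hostname string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        lines := strings.Split(string(data), "\n")
	        for _, l := range lines {
	            f := strings.Fields(l)
	            if len(f) >= 2 && f[len(f)-1] == hostname {
	                return nil // hostname already mapped
	            }
	        }
	        replaced := false
	        for i, l := range lines {
	            if strings.HasPrefix(l, "127.0.1.1") {
	                lines[i] = "127.0.1.1 " + hostname // rewrite the distro default
	                replaced = true
	                break
	            }
	        }
	        if !replaced {
	            lines = append(lines, "127.0.1.1 "+hostname)
	        }
	        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	    }
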
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
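
	The server cert generated above carries both IP and DNS SANs so one cert serves the loopback tunnel, the node IP, and the hostname. A compact sketch of issuing such a cert with crypto/x509, assuming an already-loaded CA cert and key (loading and PEM-decoding the CA is elided; the SAN values and the 26280h lifetime come from the log):

	    package provision

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    // signServerCert issues a CA-signed server certificate whose SANs match
	    // the san=[...] list in the log above.
	    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-579393"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	            DNSNames:     []string{"ha-579393", "localhost", "minikube"},
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	        if err != nil {
	            return err
	        }
	        return os.WriteFile("server.pem",
	            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	    }
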
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
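
	The sequence of sed edits above amounts to "ensure key = value in the CRI-O drop-in, then restart the service". A rough Go equivalent for the simple replace cases such as pause_image and cgroup_manager (the path comes from the log; this sketch skips the insert-if-missing handling the shell pipeline does for default_sysctls):

	    package provision

	    import (
	        "os"
	        "regexp"
	    )

	    // setCrioOption replaces any existing `key = ...` line with the desired
	    // quoted value, mirroring the sed invocations in the log.
	    func setCrioOption(path, key, value string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	        return os.WriteFile(path, out, 0644)
	    }

	For example, setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1") would reproduce the first edit, followed by the daemon-reload and crio restart shown above.
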
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
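
	The kubeadm config above is rendered from a template before being copied to /var/tmp/minikube/kubeadm.yaml.new further down. A toy text/template rendering of just the InitConfiguration head, using the values the log shows (this is not minikube's real template, which covers all four documents):

	    package provision

	    import (
	        "io"
	        "text/template"
	    )

	    // kubeadmTmpl renders only the InitConfiguration head of the document
	    // above; values are injected from initCfg.
	    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
	    kind: InitConfiguration
	    localAPIEndpoint:
	      advertiseAddress: {{.AdvertiseAddress}}
	      bindPort: {{.BindPort}}
	    nodeRegistration:
	      criSocket: {{.CRISocket}}
	      name: "{{.NodeName}}"
	      taints: []
	    `))

	    type initCfg struct {
	        AdvertiseAddress string
	        BindPort         int
	        CRISocket        string
	        NodeName         string
	    }

	    func renderKubeadm(w io.Writer) error {
	        return kubeadmTmpl.Execute(w, initCfg{
	            AdvertiseAddress: "192.168.49.2",
	            BindPort:         8443,
	            CRISocket:        "unix:///var/run/crio/crio.sock",
	            NodeName:         "ha-579393",
	        })
	    }
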
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
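
	The rendered manifest then goes to the kubelet's staticPodPath (/etc/kubernetes/manifests per the KubeletConfiguration above, and the scp of kube-vip.yaml below): a static pod needs no API server, since the kubelet launches whatever it finds in that directory. A minimal sketch of that final write, with a hypothetical helper name:

	    package provision

	    import (
	        "os"
	        "path/filepath"
	    )

	    // writeStaticPod drops the rendered kube-vip manifest into the
	    // kubelet's static pod directory so the kubelet starts it directly.
	    func writeStaticPod(manifest []byte) error {
	        dir := "/etc/kubernetes/manifests"
	        if err := os.MkdirAll(dir, 0755); err != nil {
	            return err
	        }
	        return os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0644)
	    }
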
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
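
	The openssl runs above implement the classic c_rehash convention: /etc/ssl/certs/<subject-hash>.0 must point at the CA certificate so OpenSSL-linked clients can find it by hash. A sketch of those two steps from Go, assuming the openssl binary is present as it is on this node:

	    package provision

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    // linkCertByHash computes the subject hash of a CA certificate and
	    // symlinks <hash>.0 in certsDir to it, mirroring the log above.
	    func linkCertByHash(certPath, certsDir string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return fmt.Errorf("openssl x509 -hash: %v", err)
	        }
	        hash := strings.TrimSpace(string(out))
	        link := certsDir + "/" + hash + ".0"
	        _ = os.Remove(link) // replace a stale link if present
	        return os.Symlink(certPath, link)
	    }
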
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
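The grep/rm sequence above is minikube's stale-kubeconfig cleanup: for each of the four kubeconfig files it checks whether the file already points at the expected control-plane endpoint and deletes it otherwise (here every grep exits with status 2 simply because the files do not exist yet). A minimal sketch of the same check, runnable inside the node, with the endpoint and file list taken verbatim from the log:

  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the kubeconfig only if it already targets the expected endpoint
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done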
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
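The three health endpoints that kubeadm polls for up to 4m0s are all listed in the dump above, and each check ultimately fails with a refused connection or a timeout. When triaging this kind of failure they can also be probed by hand from inside the node; a rough equivalent of the same checks, using the URLs verbatim from the log (-k skips certificate verification, since these endpoints serve self-signed TLS):

  curl -k https://192.168.49.2:8443/livez      # kube-apiserver
  curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
  curl -k https://127.0.0.1:10259/livez        # kube-scheduler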
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
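After the second init attempt fails, minikube gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output, as the "Gathering logs for ..." steps above show. The same diagnostics can be collected manually against this profile; a sketch using `minikube ssh` (the profile name ha-579393 appears elsewhere in the log, e.g. in the etcd certificate SANs, and the commands mirror the ones minikube runs):

  minikube -p ha-579393 ssh -- sudo journalctl -u kubelet -n 400
  minikube -p ha-579393 ssh -- sudo journalctl -u crio -n 400
  minikube -p ha-579393 ssh -- sudo crictl ps -a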
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:14:00 ha-579393 crio[778]: time="2025-10-14T20:14:00.4756482Z" level=info msg="createCtr: removing container 64933cbb7e17cd2df63c5238dd9be6cc9625d3fc06a9c496de1c2eeb523c3e23" id=28a9fb0b-eb4f-461c-8408-349352891350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:00 ha-579393 crio[778]: time="2025-10-14T20:14:00.475689472Z" level=info msg="createCtr: deleting container 64933cbb7e17cd2df63c5238dd9be6cc9625d3fc06a9c496de1c2eeb523c3e23 from storage" id=28a9fb0b-eb4f-461c-8408-349352891350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:00 ha-579393 crio[778]: time="2025-10-14T20:14:00.478024063Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=28a9fb0b-eb4f-461c-8408-349352891350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.453837055Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=53cbf84d-bbca-4779-b924-c8649bbeacd8 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.454832853Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=32d6ed3a-f06e-4fd9-a65a-83c2e444babc name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.455770593Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.456057655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.460565955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.461043616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.478818661Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.480117976Z" level=info msg="createCtr: deleting container ID 3ee37cbb4b10c6cc5d33080e13370e9304f30f38fbe673b5bd95b789c539ba69 from idIndex" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.480154588Z" level=info msg="createCtr: removing container 3ee37cbb4b10c6cc5d33080e13370e9304f30f38fbe673b5bd95b789c539ba69" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.480189476Z" level=info msg="createCtr: deleting container 3ee37cbb4b10c6cc5d33080e13370e9304f30f38fbe673b5bd95b789c539ba69 from storage" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.482556804Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.453852352Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=625df2a9-3e6e-4a21-b865-dbf416763dc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.45467616Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3da2bcc4-18bc-425f-8d08-10711b615dfb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.455587129Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-579393/kube-scheduler" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.455863689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.459613135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.460099521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.477146878Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.47867101Z" level=info msg="createCtr: deleting container ID ea0e9878464806d4f0be87b5f4504fae345182733689e1c469f6f21ecc88f241 from idIndex" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.478716043Z" level=info msg="createCtr: removing container ea0e9878464806d4f0be87b5f4504fae345182733689e1c469f6f21ecc88f241" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.478769443Z" level=info msg="createCtr: deleting container ea0e9878464806d4f0be87b5f4504fae345182733689e1c469f6f21ecc88f241 from storage" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.480965835Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:14:07.544935    4616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:07.545503    4616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:07.547148    4616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:07.547621    4616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:07.549213    4616 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
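Every kubectl call above fails with "connection refused" on localhost:8443 because the kube-apiserver container never started (see the CRI-O section). The health endpoint named in the earlier kubeadm error can be probed by hand; a minimal sketch, assuming the node is still reachable over `minikube ssh` and that curl is present in the node image:

	out/minikube-linux-amd64 -p ha-579393 ssh -- 'curl -ksS https://localhost:8443/livez; echo'

On a healthy control plane this prints "ok"; here it should fail with the same connection refused.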
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:14:07 up  2:56,  0 user,  load average: 0.48, 0.18, 0.51
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:14:00 ha-579393 kubelet[1963]: E1014 20:14:00.478457    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:14:00 ha-579393 kubelet[1963]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:00 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:14:00 ha-579393 kubelet[1963]: E1014 20:14:00.478491    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:14:01 ha-579393 kubelet[1963]: E1014 20:14:01.453329    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:14:01 ha-579393 kubelet[1963]: E1014 20:14:01.482975    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:14:01 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:01 ha-579393 kubelet[1963]:  > podSandboxID="aaede030549f8967d5aa233537563148ce2bbd3af1fde92787bd937fe5f1c93d"
	Oct 14 20:14:01 ha-579393 kubelet[1963]: E1014 20:14:01.483094    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:14:01 ha-579393 kubelet[1963]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:01 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:14:01 ha-579393 kubelet[1963]: E1014 20:14:01.483126    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.453379    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.481343    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:14:02 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:02 ha-579393 kubelet[1963]:  > podSandboxID="d0a8c2929974ece2a9096ac441dce40bed26c1b0ec13fe00bf80ae77bedc2f7c"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.481472    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:14:02 ha-579393 kubelet[1963]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:02 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.481533    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: E1014 20:14:04.037221    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: E1014 20:14:04.102946    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: I1014 20:14:04.281704    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: E1014 20:14:04.282163    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:14:07 ha-579393 kubelet[1963]: E1014 20:14:07.439840    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

                                                
                                                
-- /stdout --
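The repeated "cannot open sd-bus: No such file or directory" errors in the CRI-O and kubelet sections above are the root failure: with CRI-O using the systemd cgroup manager, runc has to reach systemd's bus socket inside the node container, and every container create fails when it cannot. Both sides can be checked by hand; a sketch, assuming `minikube ssh` works against this profile and that CRI-O keeps its config under /etc/crio/ as in stock installs:

	# the bus sockets runc's systemd cgroup driver needs; their absence matches "cannot open sd-bus"
	out/minikube-linux-amd64 -p ha-579393 ssh -- 'ls -l /run/systemd/private /run/dbus/system_bus_socket'
	# which cgroup manager CRI-O is actually configured with
	out/minikube-linux-amd64 -p ha-579393 ssh -- 'sudo grep -Rn cgroup_manager /etc/crio/'

If the sockets are missing while cgroup_manager is set to "systemd", each CreateContainer call will fail exactly as logged.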
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (307.849464ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:14:07.945925  482925 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.93s)
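The "stale minikube-vm" warning in the status output above pairs with the stderr complaint that "ha-579393" is missing from the kubeconfig. The fix the warning itself suggests, plus a quick verification, as a sketch (assumes kubectl is on PATH):

	out/minikube-linux-amd64 -p ha-579393 update-context
	kubectl config current-context
	kubectl config view -o jsonpath='{.clusters[?(@.name=="ha-579393")].cluster.server}'

The last line should print the rewritten API server endpoint once the context entry exists again.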

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-579393" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-579393" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
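Both assertions parse the same JSON shown above: the node count is the length of .Config.Nodes and the HAppy/Starting state is .Status on the matching profile entry. The equivalent manual check, as a sketch assuming jq is installed on the host:

	out/minikube-linux-amd64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-579393") | {status: .Status, nodes: (.Config.Nodes | length)}'

Against this run it would print status "Starting" with 1 node, which is why both the 4-node and the "HAppy" expectations fail.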
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:03:22.453114626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1c2b52fae2ff440ed705eb97dd81ba6bb6415972c195c1ca3bec92d8e7f50f0",
	            "SandboxKey": "/var/run/docker/netns/a1c2b52fae2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32907"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:ce:80:cd:a9:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "9e96e7d478fc0073b7c8e78f8945763db207596a9030627a1780b04c90be2b93",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
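The inspect output shows each container port published on an ephemeral 127.0.0.1 host port (8443/tcp on 32906, for example). A narrower query for just the API server binding, as a sketch using docker's Go-template --format flag:

	docker inspect ha-579393 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'

.NetworkSettings.Ports maps each container port to a list of host bindings, so the template indexes the map and then takes the first binding.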
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 6 (298.887147ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 20:14:08.591923  483177 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr          │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ update-context │ functional-744288 update-context --alsologtostderr -v=2                                                         │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ image          │ functional-744288 image ls                                                                                      │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete         │ -p functional-744288                                                                                            │ functional-744288 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │ 14 Oct 25 20:03 UTC │
	│ start          │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl        │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node stop m02 --alsologtostderr -v 5                                                                  │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node           │ ha-579393 node start m02 --alsologtostderr -v 5                                                                 │ ha-579393         │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:03:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
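	Read against that format, the first entry below decodes as severity I (info), date 1014 (Oct 14), time 20:03:17.125360, thread id 470544, and source location out.go:360.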
	I1014 20:03:17.125360  470544 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:03:17.125666  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125678  470544 out.go:374] Setting ErrFile to fd 2...
	I1014 20:03:17.125685  470544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:03:17.125940  470544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:03:17.126490  470544 out.go:368] Setting JSON to false
	I1014 20:03:17.127467  470544 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9943,"bootTime":1760462254,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:03:17.127588  470544 start.go:141] virtualization: kvm guest
	I1014 20:03:17.129767  470544 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:03:17.131241  470544 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:03:17.131264  470544 notify.go:220] Checking for updates...
	I1014 20:03:17.134306  470544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:03:17.135806  470544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:03:17.137119  470544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:03:17.138379  470544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:03:17.140082  470544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:03:17.141662  470544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:03:17.165916  470544 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:03:17.166098  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.229548  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.218250431 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.229650  470544 docker.go:318] overlay module found
	I1014 20:03:17.231449  470544 out.go:179] * Using the docker driver based on user configuration
	I1014 20:03:17.232741  470544 start.go:305] selected driver: docker
	I1014 20:03:17.232773  470544 start.go:925] validating driver "docker" against <nil>
	I1014 20:03:17.232790  470544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:03:17.233313  470544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:03:17.295257  470544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:03:17.284941769 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:03:17.295445  470544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:03:17.295657  470544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:03:17.297506  470544 out.go:179] * Using Docker driver with root privileges
	I1014 20:03:17.298873  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:17.298932  470544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 20:03:17.298947  470544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:03:17.299040  470544 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1014 20:03:17.300487  470544 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:03:17.301710  470544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:03:17.302965  470544 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:03:17.304134  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.304173  470544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:03:17.304183  470544 cache.go:58] Caching tarball of preloaded images
	I1014 20:03:17.304233  470544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:03:17.304269  470544 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:03:17.304279  470544 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
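The preload steps above only verify a tarball that is already cached, so nothing is downloaded on this run. The cached artifact can be listed directly; a sketch, with the path copied verbatim from the log lines above:

	ls -lh /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/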
	I1014 20:03:17.304557  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:17.304580  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json: {Name:mk533f81ade9d1a5f526dccc10d22b964ab1abab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:17.326336  470544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:03:17.326357  470544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:03:17.326374  470544 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:03:17.326399  470544 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:03:17.327173  470544 start.go:364] duration metric: took 757.56µs to acquireMachinesLock for "ha-579393"
	I1014 20:03:17.327207  470544 start.go:93] Provisioning new machine with config: &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:03:17.327266  470544 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:03:17.329132  470544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1014 20:03:17.329332  470544 start.go:159] libmachine.API.Create for "ha-579393" (driver="docker")
	I1014 20:03:17.329358  470544 client.go:168] LocalClient.Create starting
	I1014 20:03:17.329426  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:03:17.329458  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329469  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329531  470544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:03:17.329556  470544 main.go:141] libmachine: Decoding PEM data...
	I1014 20:03:17.329563  470544 main.go:141] libmachine: Parsing certificate...
	I1014 20:03:17.329904  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:03:17.347467  470544 cli_runner.go:211] docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:03:17.347535  470544 network_create.go:284] running [docker network inspect ha-579393] to gather additional debugging logs...
	I1014 20:03:17.347555  470544 cli_runner.go:164] Run: docker network inspect ha-579393
	W1014 20:03:17.364018  470544 cli_runner.go:211] docker network inspect ha-579393 returned with exit code 1
	I1014 20:03:17.364049  470544 network_create.go:287] error running [docker network inspect ha-579393]: docker network inspect ha-579393: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-579393 not found
	I1014 20:03:17.364062  470544 network_create.go:289] output of [docker network inspect ha-579393]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-579393 not found
	
	** /stderr **
	I1014 20:03:17.364179  470544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:17.381335  470544 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001946000}
	I1014 20:03:17.381374  470544 network_create.go:124] attempt to create docker network ha-579393 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 20:03:17.381422  470544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-579393 ha-579393
	I1014 20:03:17.438306  470544 network_create.go:108] docker network ha-579393 192.168.49.0/24 created
	I1014 20:03:17.438342  470544 kic.go:121] calculated static IP "192.168.49.2" for the "ha-579393" container
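The two lines above show the kic driver's addressing convention: the freshly created /24 takes 192.168.49.1 as the Docker gateway and the first node is pinned to the next host address, 192.168.49.2. A minimal Go sketch of that derivation follows; nthHostIP is an illustrative name, not minikube's API (its own logic lives in network.go per the log), and only IPv4 with small n is handled.

package main

import (
	"fmt"
	"log"
	"net"
)

// nthHostIP returns the nth host address inside cidr: n=1 yields the
// gateway-style .1 address, n=2 the first node. IPv4 and small n assumed.
func nthHostIP(cidr string, n int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 subnet expected: %s", cidr)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(n) // step up from the network address
	return out, nil
}

func main() {
	gw, err := nthHostIP("192.168.49.0/24", 1)
	if err != nil {
		log.Fatal(err)
	}
	node, _ := nthHostIP("192.168.49.0/24", 2)
	fmt.Println(gw, node) // 192.168.49.1 192.168.49.2
}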
	I1014 20:03:17.438422  470544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:03:17.455388  470544 cli_runner.go:164] Run: docker volume create ha-579393 --label name.minikube.sigs.k8s.io=ha-579393 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:03:17.474494  470544 oci.go:103] Successfully created a docker volume ha-579393
	I1014 20:03:17.474585  470544 cli_runner.go:164] Run: docker run --rm --name ha-579393-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --entrypoint /usr/bin/test -v ha-579393:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:03:17.868197  470544 oci.go:107] Successfully prepared a docker volume ha-579393
	I1014 20:03:17.868264  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:17.868291  470544 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:03:17.868380  470544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:03:22.341626  470544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-579393:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473193247s)
	I1014 20:03:22.341663  470544 kic.go:203] duration metric: took 4.47336734s to extract preloaded images to volume ...
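The preload step above avoids pulling images at cluster start: a throwaway container whose entrypoint is tar unpacks the lz4-compressed image tarball straight into the named volume, and the "duration metric" line is simply the elapsed time around the call. A hedged sketch of the same docker invocation via os/exec (the tarball path is a placeholder; this is not minikube's code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	tarball := "/path/to/preloaded-images.tar.lz4" // placeholder; real path is in the log above
	volume := "ha-579393"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703"

	start := time.Now()
	// Disposable container: mount the tarball read-only, the volume as the
	// extraction target, and run tar with lz4 decompression.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
}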
	W1014 20:03:22.341815  470544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:03:22.341863  470544 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:03:22.341916  470544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:03:22.400050  470544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-579393 --name ha-579393 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-579393 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-579393 --network ha-579393 --ip 192.168.49.2 --volume ha-579393:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:03:22.677726  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Running}}
	I1014 20:03:22.696026  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.715378  470544 cli_runner.go:164] Run: docker exec ha-579393 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:03:22.762223  470544 oci.go:144] the created container "ha-579393" has a running status.
	I1014 20:03:22.762255  470544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa...
	I1014 20:03:22.820780  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1014 20:03:22.820832  470544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:03:22.850515  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.870190  470544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:03:22.870210  470544 kic_runner.go:114] Args: [docker exec --privileged ha-579393 chown docker:docker /home/docker/.ssh/authorized_keys]
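Key provisioning above is two steps: generate an RSA key pair on the host, then copy the public half into /home/docker/.ssh/authorized_keys inside the container and chown it. A self-contained sketch of the key-generation half, assuming golang.org/x/crypto/ssh for the authorized_keys encoding (the output filenames mirror the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Machine key pair, as the id_rsa/id_rsa.pub names above suggest.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Private half, PEM-encoded, owner-readable only.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	// Public half in authorized_keys format — the small payload the log
	// copies into /home/docker/.ssh/authorized_keys.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}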
	I1014 20:03:22.912447  470544 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:03:22.934356  470544 machine.go:93] provisionDockerMachine start ...
	I1014 20:03:22.934472  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:22.954394  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:22.954768  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:22.954796  470544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:03:22.955439  470544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50642->127.0.0.1:32903: read: connection reset by peer
	I1014 20:03:26.104260  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
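Note the sequence above: the first dial fails with "connection reset by peer" because sshd inside the new container is not up yet, and the provisioner simply retries until the handshake succeeds a few seconds later. A sketch of such a retry loop with golang.org/x/crypto/ssh; dialWithRetry is an illustrative name, while the user and port match the log:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until the SSH handshake succeeds; early
// attempts may be reset while sshd is still starting in the container.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Second)
	}
	return nil, err
}

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // the machine key from the provisioning step
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32903", cfg, 10)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("ssh handshake succeeded")
}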
	
	I1014 20:03:26.104298  470544 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:03:26.104379  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.122921  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.123167  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.123185  470544 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:03:26.281180  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:03:26.281286  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.299367  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.299579  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.299596  470544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:03:26.445909  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:03:26.445941  470544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:03:26.445960  470544 ubuntu.go:190] setting up certificates
	I1014 20:03:26.445974  470544 provision.go:84] configureAuth start
	I1014 20:03:26.446042  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:26.463014  470544 provision.go:143] copyHostCerts
	I1014 20:03:26.463059  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463090  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:03:26.463099  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:03:26.463169  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:03:26.463255  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463272  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:03:26.463279  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:03:26.463304  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:03:26.463350  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463367  470544 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:03:26.463373  470544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:03:26.463396  470544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:03:26.463447  470544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:03:26.617910  470544 provision.go:177] copyRemoteCerts
	I1014 20:03:26.617976  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:03:26.618022  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.636120  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:26.739380  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:03:26.739452  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:03:26.759232  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:03:26.759293  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:03:26.778271  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:03:26.778338  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:03:26.796388  470544 provision.go:87] duration metric: took 350.39932ms to configureAuth
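configureAuth above generates a server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.49.2, ha-579393, localhost, minikube). A sketch with crypto/x509; to stay self-contained it creates a throwaway CA instead of loading ca.pem/ca-key.pem as the real flow does:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem from the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-579393"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-579393", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}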
	I1014 20:03:26.796420  470544 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:03:26.796596  470544 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:03:26.796705  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:26.816035  470544 main.go:141] libmachine: Using SSH client type: native
	I1014 20:03:26.816243  470544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32903 <nil> <nil>}
	I1014 20:03:26.816259  470544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:03:27.082126  470544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:03:27.082156  470544 machine.go:96] duration metric: took 4.147772563s to provisionDockerMachine
	I1014 20:03:27.082171  470544 client.go:171] duration metric: took 9.752806403s to LocalClient.Create
	I1014 20:03:27.082197  470544 start.go:167] duration metric: took 9.752866506s to libmachine.API.Create "ha-579393"
	I1014 20:03:27.082205  470544 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:03:27.082215  470544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:03:27.082274  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:03:27.082316  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.101460  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.208078  470544 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:03:27.212053  470544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:03:27.212086  470544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:03:27.212100  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:03:27.212182  470544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:03:27.212277  470544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:03:27.212288  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:03:27.212396  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:03:27.220472  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:27.241576  470544 start.go:296] duration metric: took 159.355524ms for postStartSetup
	I1014 20:03:27.241976  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.259468  470544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:03:27.259849  470544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:03:27.259907  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.277799  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.378323  470544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:03:27.383519  470544 start.go:128] duration metric: took 10.056234444s to createHost
	I1014 20:03:27.383548  470544 start.go:83] releasing machines lock for "ha-579393", held for 10.056356237s
	I1014 20:03:27.383629  470544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:03:27.401699  470544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:03:27.401709  470544 ssh_runner.go:195] Run: cat /version.json
	I1014 20:03:27.401815  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.401838  470544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:03:27.420176  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.421057  470544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:03:27.574708  470544 ssh_runner.go:195] Run: systemctl --version
	I1014 20:03:27.581776  470544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:03:27.618049  470544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:03:27.622981  470544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:03:27.623059  470544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:03:27.650696  470544 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
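Disabling the preexisting bridge/podman CNI configs, as the find/mv above does, keeps CRI-O's default bridge from conflicting with the CNI minikube installs later (kindnet, per the lines further down). The same rename-to-.mk_disabled pass, sketched in Go rather than shell:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	var disabled []string
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			disabled = append(disabled, m)
		}
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}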
	I1014 20:03:27.650726  470544 start.go:495] detecting cgroup driver to use...
	I1014 20:03:27.650795  470544 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:03:27.650860  470544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:03:27.668397  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:03:27.681391  470544 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:03:27.681446  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:03:27.698246  470544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:03:27.716479  470544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:03:27.798818  470544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:03:27.884317  470544 docker.go:234] disabling docker service ...
	I1014 20:03:27.884384  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:03:27.905126  470544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:03:27.918827  470544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:03:28.002081  470544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:03:28.084842  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:03:28.098220  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:03:28.113305  470544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:03:28.113364  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.124477  470544 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:03:28.124559  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.134261  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.144071  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.154359  470544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:03:28.163636  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.173644  470544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.188326  470544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:03:28.198228  470544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:03:28.206234  470544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:03:28.214019  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.295010  470544 ssh_runner.go:195] Run: sudo systemctl restart crio
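The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, and open unprivileged ports via default_sysctls, then restart CRI-O. The two headline substitutions restated as a Go sketch (not minikube's code; values copied from the log):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl restart crio`, as in the log, must follow for the
	// changes to take effect.
}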
	I1014 20:03:28.401206  470544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:03:28.401272  470544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:03:28.405522  470544 start.go:563] Will wait 60s for crictl version
	I1014 20:03:28.405585  470544 ssh_runner.go:195] Run: which crictl
	I1014 20:03:28.409373  470544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:03:28.435266  470544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:03:28.435335  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.465834  470544 ssh_runner.go:195] Run: crio --version
	I1014 20:03:28.497274  470544 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:03:28.498593  470544 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:03:28.517029  470544 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:03:28.521498  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
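The bash one-liner above is an idempotent /etc/hosts upsert: drop any line already tab-terminated with the name, append the fresh mapping, and copy the result back. The same logic in Go, minus the sudo hop; upsertHost is an illustrative name:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHost removes any stale entry for name (the grep -v $'\t<name>$'
// step) and appends a fresh "ip<TAB>name" line.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}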
	I1014 20:03:28.532817  470544 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:03:28.532940  470544 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:03:28.532992  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.565925  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.565951  470544 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:03:28.566006  470544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:03:28.592978  470544 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:03:28.593003  470544 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:03:28.593011  470544 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:03:28.593109  470544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
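The kubelet drop-in above is rendered from per-node values: the binary path from the Kubernetes version, the hostname override, and the node IP. A sketch of equivalent templating with text/template; the struct fields are illustrative, not minikube's, and the unit is trimmed to the interesting flags:

package main

import (
	"log"
	"os"
	"text/template"
)

// A trimmed version of the drop-in above; fields are illustrative.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "ha-579393", "192.168.49.2"})
	if err != nil {
		log.Fatal(err)
	}
}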
	I1014 20:03:28.593172  470544 ssh_runner.go:195] Run: crio config
	I1014 20:03:28.638570  470544 cni.go:84] Creating CNI manager for ""
	I1014 20:03:28.638590  470544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:03:28.638604  470544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:03:28.638626  470544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:03:28.638736  470544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
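The generated kubeadm.yaml above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks such a multi-document file with gopkg.in/yaml.v3's streaming decoder and prints each document's kind (the path is a placeholder):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // placeholder; the log writes /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.v3 decodes one "---"-separated document per Decode call and
	// returns io.EOF once the stream is exhausted.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}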
	
	I1014 20:03:28.638778  470544 kube-vip.go:115] generating kube-vip config ...
	I1014 20:03:28.638827  470544 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1014 20:03:28.651221  470544 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:03:28.651322  470544 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
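The kube-vip manifest above pins the control-plane VIP to 192.168.49.254, which for this /24 is the last usable host address, one below broadcast. A sketch of that derivation; lastHostIP is an illustrative name and IPv4 is assumed:

package main

import (
	"fmt"
	"log"
	"net"
)

// lastHostIP ORs the inverted mask into the network address to get the
// broadcast address, then steps back one to the top usable host.
func lastHostIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 subnet expected: %s", cidr)
	}
	out := make(net.IP, len(ip))
	for i := range ip {
		out[i] = ip[i] | ^ipnet.Mask[i] // broadcast address
	}
	out[3]-- // last usable host below broadcast
	return out, nil
}

func main() {
	vip, err := lastHostIP("192.168.49.0/24")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(vip) // 192.168.49.254
}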
	I1014 20:03:28.651371  470544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:03:28.659733  470544 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:03:28.659825  470544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 20:03:28.667977  470544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:03:28.681172  470544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:03:28.697239  470544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:03:28.710080  470544 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1014 20:03:28.724688  470544 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1014 20:03:28.728568  470544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:03:28.738656  470544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:03:28.817749  470544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:03:28.841528  470544 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:03:28.841566  470544 certs.go:195] generating shared ca certs ...
	I1014 20:03:28.841587  470544 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:28.841727  470544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:03:28.841805  470544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:03:28.841821  470544 certs.go:257] generating profile certs ...
	I1014 20:03:28.841874  470544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:03:28.841897  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt with IP's: []
	I1014 20:03:29.018063  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt ...
	I1014 20:03:29.018101  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt: {Name:mk8b90bc05b294b6c05e808012d45472c3093f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018299  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key ...
	I1014 20:03:29.018321  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key: {Name:mk4670db425ebf46f3bf4968573343a975480683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.018407  470544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612
	I1014 20:03:29.018424  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1014 20:03:29.208082  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 ...
	I1014 20:03:29.208118  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612: {Name:mk2e48e06bd7a0fd2aa3ea9def795ac03bded956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.208287  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 ...
	I1014 20:03:29.208300  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612: {Name:mkc6fe9b4a3330b4fa61a71beeb137e948294421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.209199  470544 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:03:29.209315  470544 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.6d4a1612 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:03:29.209373  470544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:03:29.209389  470544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt with IP's: []
	I1014 20:03:29.349734  470544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt ...
	I1014 20:03:29.349788  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt: {Name:mk3c38e66fa21f9bf9f031b0b611fbb1d8c4882a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:03:29.349962  470544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key ...
	I1014 20:03:29.349973  470544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key: {Name:mke28e4de33c7a0d50feb0b1335c5cd9e94d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
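Among the SANs used for the apiserver certificate a few lines up is 10.96.0.1: the first address of the service CIDR (10.96.0.0/12), which the in-cluster kubernetes.default Service occupies, so pods can verify the apiserver by its ClusterIP. A sketch of that derivation; firstServiceIP is an illustrative name and IPv4 is assumed:

package main

import (
	"fmt"
	"log"
	"net"
)

// firstServiceIP returns the service CIDR's network address plus one,
// the ClusterIP conventionally assigned to the kubernetes Service.
func firstServiceIP(serviceCIDR string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 CIDR expected: %s", serviceCIDR)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[len(out)-1]++ // network address + 1
	return out, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip) // 10.96.0.1
}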
	I1014 20:03:29.350047  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:03:29.350064  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:03:29.350075  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:03:29.350087  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:03:29.350099  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:03:29.350109  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:03:29.350122  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:03:29.350132  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:03:29.350183  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:03:29.350228  470544 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:03:29.350237  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:03:29.350258  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:03:29.350280  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:03:29.350300  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:03:29.350336  470544 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:03:29.350360  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.350373  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.350387  470544 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.350927  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:03:29.369482  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:03:29.386797  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:03:29.404413  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:03:29.421955  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 20:03:29.439808  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:03:29.457222  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:03:29.475143  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:03:29.493300  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:03:29.513957  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:03:29.535163  470544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:03:29.554358  470544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:03:29.567671  470544 ssh_runner.go:195] Run: openssl version
	I1014 20:03:29.574116  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:03:29.582980  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586713  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.586836  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:03:29.620973  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:03:29.629990  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:03:29.638580  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642541  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.642595  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:03:29.677097  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:03:29.687002  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:03:29.696267  470544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700535  470544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.700593  470544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:03:29.734895  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
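The test -L / ln -fs pairs above follow OpenSSL's rehash convention: each CA file under /etc/ssl/certs is reachable via a <subject-hash>.0 symlink, where the hash is exactly what the preceding openssl x509 -hash -noout calls print. A sketch that shells out to the same command and creates the link; linkCert is an illustrative name and writing to /etc/ssl/certs needs root:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the subject hash of pemPath via the openssl CLI and
// links /etc/ssl/certs/<hash>.0 to it, replacing any stale link
// (the ln -fs semantics from the log).
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ignore "not found"; we are about to recreate it
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}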
	I1014 20:03:29.744295  470544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:03:29.748240  470544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:03:29.748305  470544 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:03:29.748380  470544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:03:29.748448  470544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:03:29.777054  470544 cri.go:89] found id: ""
	I1014 20:03:29.777134  470544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:03:29.785507  470544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:03:29.793651  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:03:29.793711  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:03:29.801881  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:03:29.801906  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:03:29.801956  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:03:29.809948  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:03:29.810011  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:03:29.817979  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:03:29.825985  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:03:29.826064  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:03:29.833833  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.842078  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:03:29.842149  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:03:29.850122  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:03:29.858250  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:03:29.858312  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:03:29.866004  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:03:29.905901  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:03:29.906013  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:03:29.928412  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:03:29.928498  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:03:29.928541  470544 kubeadm.go:318] OS: Linux
	I1014 20:03:29.928583  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:03:29.928652  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:03:29.928730  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:03:29.928805  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:03:29.928849  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:03:29.928892  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:03:29.928935  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:03:29.928973  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:03:29.989181  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:03:29.989342  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:03:29.989457  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:03:29.997476  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:03:30.000428  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:03:30.000531  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:03:30.000656  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:03:30.367367  470544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:03:30.888441  470544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:03:31.416284  470544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:03:31.486302  470544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:03:32.293304  470544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:03:32.293457  470544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.436942  470544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:03:32.437134  470544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 20:03:32.740861  470544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:03:32.874202  470544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:03:33.330864  470544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:03:33.330961  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:03:33.434687  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:03:33.590351  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:03:33.928031  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:03:34.042691  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:03:34.576186  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:03:34.576637  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:03:34.579016  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:03:34.581440  470544 out.go:252]   - Booting up control plane ...
	I1014 20:03:34.581593  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:03:34.581712  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:03:34.581832  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:03:34.595204  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:03:34.595404  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:03:34.601919  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:03:34.602142  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:03:34.602243  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:03:34.699612  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:03:34.699737  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:03:35.200483  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.970561ms
	I1014 20:03:35.205501  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:03:35.205636  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:03:35.205873  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:03:35.205987  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:07:35.206930  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	I1014 20:07:35.207172  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	I1014 20:07:35.207359  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	I1014 20:07:35.207371  470544 kubeadm.go:318] 
	I1014 20:07:35.207694  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:07:35.208049  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:07:35.208276  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:07:35.208532  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:07:35.208786  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:07:35.209074  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:07:35.209100  470544 kubeadm.go:318] 
	I1014 20:07:35.211976  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:35.212145  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:07:35.212706  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:07:35.212843  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
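	The crictl hint kubeadm prints above can be run as a single pass; a minimal sketch, using only the socket path and flags that appear elsewhere in this log:
	
	  # Sketch: list kube containers (including exited ones) and dump their logs.
	  sock="unix:///var/run/crio/crio.sock"
	  sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause
	  for id in $(sudo crictl --runtime-endpoint "$sock" ps -a --quiet --name kube); do
	    sudo crictl --runtime-endpoint "$sock" logs "$id"
	  done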
	W1014 20:07:35.212972  470544 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-579393 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.970561ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000676337s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000720088s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000743417s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
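	The three endpoints named in the wait-control-plane error above can also be probed directly from the node; a minimal sketch using the URLs exactly as the error reports them (-k because the serving certificates are cluster-internal):
	
	  # Sketch: probe the control-plane health endpoints from the error above.
	  curl -ks https://192.168.49.2:8443/livez    || echo "kube-apiserver unreachable"
	  curl -ks https://127.0.0.1:10257/healthz    || echo "kube-controller-manager unreachable"
	  curl -ks https://127.0.0.1:10259/livez      || echo "kube-scheduler unreachable"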
	
	I1014 20:07:35.213050  470544 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:07:37.966951  470544 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.753873381s)
	I1014 20:07:37.967030  470544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:07:37.980538  470544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:07:37.980613  470544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:07:37.988822  470544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:07:37.988844  470544 kubeadm.go:157] found existing configuration files:
	
	I1014 20:07:37.988897  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:07:37.996970  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:07:37.997051  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:07:38.004797  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:07:38.012635  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:07:38.012702  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:07:38.020175  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.028386  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:07:38.028440  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:07:38.036154  470544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:07:38.044027  470544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:07:38.044088  470544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:07:38.051422  470544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:07:38.110505  470544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:07:38.170186  470544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:11:40.721242  470544 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:11:40.721491  470544 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:11:40.724650  470544 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:11:40.724789  470544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:11:40.724937  470544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:11:40.725018  470544 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:11:40.725068  470544 kubeadm.go:318] OS: Linux
	I1014 20:11:40.725125  470544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:11:40.725181  470544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:11:40.725248  470544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:11:40.725310  470544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:11:40.725365  470544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:11:40.725423  470544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:11:40.725473  470544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:11:40.725534  470544 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:11:40.725639  470544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:11:40.725782  470544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:11:40.725977  470544 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:11:40.726087  470544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:11:40.728584  470544 out.go:252]   - Generating certificates and keys ...
	I1014 20:11:40.728668  470544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:11:40.728723  470544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:11:40.728820  470544 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:11:40.728895  470544 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:11:40.728974  470544 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:11:40.729051  470544 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:11:40.729150  470544 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:11:40.729214  470544 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:11:40.729282  470544 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:11:40.729340  470544 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:11:40.729378  470544 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:11:40.729422  470544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:11:40.729466  470544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:11:40.729531  470544 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:11:40.729604  470544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:11:40.729710  470544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:11:40.729805  470544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:11:40.729913  470544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:11:40.730020  470544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:11:40.731279  470544 out.go:252]   - Booting up control plane ...
	I1014 20:11:40.731376  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:11:40.731472  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:11:40.731563  470544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:11:40.731676  470544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:11:40.731820  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:11:40.731960  470544 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:11:40.732060  470544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:11:40.732099  470544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:11:40.732241  470544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:11:40.732368  470544 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:11:40.732459  470544 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001170855s
	I1014 20:11:40.732550  470544 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:11:40.732649  470544 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1014 20:11:40.732789  470544 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:11:40.732875  470544 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:11:40.732961  470544 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	I1014 20:11:40.733076  470544 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	I1014 20:11:40.733142  470544 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	I1014 20:11:40.733157  470544 kubeadm.go:318] 
	I1014 20:11:40.733272  470544 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:11:40.733349  470544 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:11:40.733417  470544 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:11:40.733491  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:11:40.733553  470544 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:11:40.733641  470544 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:11:40.733684  470544 kubeadm.go:318] 
	I1014 20:11:40.733748  470544 kubeadm.go:402] duration metric: took 8m10.985445817s to StartCluster
	I1014 20:11:40.733824  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:11:40.733881  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:11:40.762474  470544 cri.go:89] found id: ""
	I1014 20:11:40.762524  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.762538  470544 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:11:40.762545  470544 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:11:40.762602  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:11:40.789961  470544 cri.go:89] found id: ""
	I1014 20:11:40.789989  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.789999  470544 logs.go:284] No container was found matching "etcd"
	I1014 20:11:40.790007  470544 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:11:40.790062  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:11:40.817095  470544 cri.go:89] found id: ""
	I1014 20:11:40.817128  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.817141  470544 logs.go:284] No container was found matching "coredns"
	I1014 20:11:40.817148  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:11:40.817206  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:11:40.843942  470544 cri.go:89] found id: ""
	I1014 20:11:40.843974  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.843984  470544 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:11:40.843991  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:11:40.844054  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:11:40.870262  470544 cri.go:89] found id: ""
	I1014 20:11:40.870289  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.870299  470544 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:11:40.870308  470544 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:11:40.870377  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:11:40.896558  470544 cri.go:89] found id: ""
	I1014 20:11:40.896588  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.896597  470544 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:11:40.896604  470544 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:11:40.896660  470544 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:11:40.923171  470544 cri.go:89] found id: ""
	I1014 20:11:40.923202  470544 logs.go:282] 0 containers: []
	W1014 20:11:40.923214  470544 logs.go:284] No container was found matching "kindnet"
	I1014 20:11:40.923225  470544 logs.go:123] Gathering logs for kubelet ...
	I1014 20:11:40.923237  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 20:11:40.991897  470544 logs.go:123] Gathering logs for dmesg ...
	I1014 20:11:40.991944  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:11:41.010371  470544 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:11:41.010404  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:11:41.071387  470544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:11:41.064404    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.064977    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066171    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.066648    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:11:41.068287    2582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:11:41.071407  470544 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:11:41.071419  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:11:41.133347  470544 logs.go:123] Gathering logs for container status ...
	I1014 20:11:41.133392  470544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
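	The gathering pass above reduces to five shell probes; the same sequence by hand, with each command taken verbatim from the log lines above:
	
	  sudo journalctl -u kubelet -n 400     # kubelet logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is down
	  sudo journalctl -u crio -n 400        # CRI-O logs
	  sudo "$(which crictl || echo crictl)" ps -a     # container status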
	W1014 20:11:41.166639  470544 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001170855s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000981646s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001242346s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001205382s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:11:41.166697  470544 out.go:285] * 
	W1014 20:11:41.166793  470544 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1014 20:11:41.166813  470544 out.go:285] * 
	W1014 20:11:41.168436  470544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:11:41.172303  470544 out.go:203] 
	W1014 20:11:41.173765  470544 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1014 20:11:41.173801  470544 out.go:285] * 
	I1014 20:11:41.176311  470544 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.480154588Z" level=info msg="createCtr: removing container 3ee37cbb4b10c6cc5d33080e13370e9304f30f38fbe673b5bd95b789c539ba69" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.480189476Z" level=info msg="createCtr: deleting container 3ee37cbb4b10c6cc5d33080e13370e9304f30f38fbe673b5bd95b789c539ba69 from storage" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:01 ha-579393 crio[778]: time="2025-10-14T20:14:01.482556804Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=22888c03-8148-4ec6-b526-0824af3a6691 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.453852352Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=625df2a9-3e6e-4a21-b865-dbf416763dc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.45467616Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3da2bcc4-18bc-425f-8d08-10711b615dfb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.455587129Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-579393/kube-scheduler" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.455863689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.459613135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.460099521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.477146878Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.47867101Z" level=info msg="createCtr: deleting container ID ea0e9878464806d4f0be87b5f4504fae345182733689e1c469f6f21ecc88f241 from idIndex" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.478716043Z" level=info msg="createCtr: removing container ea0e9878464806d4f0be87b5f4504fae345182733689e1c469f6f21ecc88f241" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.478769443Z" level=info msg="createCtr: deleting container ea0e9878464806d4f0be87b5f4504fae345182733689e1c469f6f21ecc88f241 from storage" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:02 ha-579393 crio[778]: time="2025-10-14T20:14:02.480965835Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=d0b22b73-a64d-489e-b26f-93c0803cd72b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.453255688Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=06afd312-2297-412e-afc9-5d34726e5584 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.454305068Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=8aa79a1b-b016-4286-a688-8471746b482b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.455298542Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=3da63682-982c-4e95-9219-5062957fc140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.455581546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.464017534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.464650939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.484852444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3da63682-982c-4e95-9219-5062957fc140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.486415307Z" level=info msg="createCtr: deleting container ID ccb80d0ede4edb9b14e5263be5b46e2b99d5dedd7856f273fbb5a21203337b1a from idIndex" id=3da63682-982c-4e95-9219-5062957fc140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.486470467Z" level=info msg="createCtr: removing container ccb80d0ede4edb9b14e5263be5b46e2b99d5dedd7856f273fbb5a21203337b1a" id=3da63682-982c-4e95-9219-5062957fc140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.48651413Z" level=info msg="createCtr: deleting container ccb80d0ede4edb9b14e5263be5b46e2b99d5dedd7856f273fbb5a21203337b1a from storage" id=3da63682-982c-4e95-9219-5062957fc140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:14:08 ha-579393 crio[778]: time="2025-10-14T20:14:08.48907148Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=3da63682-982c-4e95-9219-5062957fc140 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:14:09.193686    4793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:09.194716    4793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:09.195434    4793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:09.197118    4793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:14:09.197705    4793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:14:09 up  2:56,  0 user,  load average: 0.48, 0.18, 0.51
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:14:01 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:14:01 ha-579393 kubelet[1963]: E1014 20:14:01.483126    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.453379    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.481343    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:14:02 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:02 ha-579393 kubelet[1963]:  > podSandboxID="d0a8c2929974ece2a9096ac441dce40bed26c1b0ec13fe00bf80ae77bedc2f7c"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.481472    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:14:02 ha-579393 kubelet[1963]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:02 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:14:02 ha-579393 kubelet[1963]: E1014 20:14:02.481533    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: E1014 20:14:04.037221    1963 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: E1014 20:14:04.102946    1963 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: I1014 20:14:04.281704    1963 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:14:04 ha-579393 kubelet[1963]: E1014 20:14:04.282163    1963 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:14:07 ha-579393 kubelet[1963]: E1014 20:14:07.439840    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 14 20:14:07 ha-579393 kubelet[1963]: E1014 20:14:07.627408    1963 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 14 20:14:08 ha-579393 kubelet[1963]: E1014 20:14:08.452718    1963 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:14:08 ha-579393 kubelet[1963]: E1014 20:14:08.489399    1963 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:14:08 ha-579393 kubelet[1963]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:08 ha-579393 kubelet[1963]:  > podSandboxID="41ac2f349da00920582806a729366af02d901203fe089532947fdee2d8b61fa0"
	Oct 14 20:14:08 ha-579393 kubelet[1963]: E1014 20:14:08.489498    1963 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:14:08 ha-579393 kubelet[1963]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:14:08 ha-579393 kubelet[1963]:  > logger="UnhandledError"
	Oct 14 20:14:08 ha-579393 kubelet[1963]: E1014 20:14:08.489531    1963 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:14:08 ha-579393 kubelet[1963]: E1014 20:14:08.648671    1963 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e746019ae0b94  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,LastTimestamp:2025-10-14 20:07:40.444961684 +0000 UTC m=+0.732526825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	

-- /stdout --
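Every CreateContainer attempt in the log above fails with "cannot open sd-bus: No such file or directory", so etcd, kube-scheduler and kube-controller-manager never start and the apiserver on localhost:8443 stays unreachable, which accounts for the connection-refused errors in the describe-nodes output. CRI-O is configured with cgroup_manager = "systemd" (visible in the restart log further down), and in that mode the runtime goes through the systemd D-Bus socket to create container cgroups; a missing or dead /run/dbus/system_bus_socket inside the node container would produce exactly this error. A triage sketch along those lines (hypothetical commands for this profile, not output captured in this run):

	docker exec ha-579393 ls -l /run/dbus/system_bus_socket   # is the system D-Bus socket present in the node container?
	docker exec ha-579393 systemctl is-active dbus            # is dbus itself running under the container's systemd?
	minikube -p ha-579393 ssh -- sudo crictl ps -a            # confirm no control-plane container ever reached Running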
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 6 (303.094601ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:14:09.588628  483511 status.go:458] kubeconfig endpoint: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.64s)
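Beyond the stopped apiserver, the status probe fails for a second reason: the "ha-579393" entry is missing from the test's kubeconfig, so kubectl is still pointed at a stale context. With a live profile this is normally repairable as the warning above suggests; a sketch, assuming the same profile name:

	minikube update-context -p ha-579393
	kubectl config current-context    # should report the ha-579393 context afterwards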

TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-579393 stop --alsologtostderr -v 5: (1.219254675s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 start --wait true --alsologtostderr -v 5
E1014 20:14:12.798290  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:19:12.798023  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.617304201s)

-- stdout --
	* [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1014 20:14:10.920500  483855 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:14:10.920744  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920765  483855 out.go:374] Setting ErrFile to fd 2...
	I1014 20:14:10.920770  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920950  483855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:14:10.921400  483855 out.go:368] Setting JSON to false
	I1014 20:14:10.922307  483855 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10597,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:14:10.922423  483855 start.go:141] virtualization: kvm guest
	I1014 20:14:10.924678  483855 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:14:10.925922  483855 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:14:10.925932  483855 notify.go:220] Checking for updates...
	I1014 20:14:10.928150  483855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:14:10.929578  483855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:10.931110  483855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:14:10.932372  483855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:14:10.933593  483855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:14:10.935251  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:10.935376  483855 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:14:10.960161  483855 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:14:10.960301  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.020772  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.009250952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.020894  483855 docker.go:318] overlay module found
	I1014 20:14:11.022935  483855 out.go:179] * Using the docker driver based on existing profile
	I1014 20:14:11.024198  483855 start.go:305] selected driver: docker
	I1014 20:14:11.024214  483855 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.024304  483855 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:14:11.024438  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.086893  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.076411866 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.087678  483855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:14:11.087721  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:11.087800  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:11.087868  483855 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.090005  483855 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:14:11.091314  483855 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:14:11.092803  483855 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:14:11.094111  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:11.094148  483855 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:14:11.094156  483855 cache.go:58] Caching tarball of preloaded images
	I1014 20:14:11.094218  483855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:14:11.094241  483855 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:14:11.094277  483855 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:14:11.094382  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.115783  483855 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:14:11.115808  483855 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:14:11.115828  483855 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:14:11.115855  483855 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:14:11.115928  483855 start.go:364] duration metric: took 47.72µs to acquireMachinesLock for "ha-579393"
	I1014 20:14:11.115949  483855 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:14:11.115957  483855 fix.go:54] fixHost starting: 
	I1014 20:14:11.116246  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.133733  483855 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:14:11.133784  483855 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:14:11.135788  483855 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:14:11.135872  483855 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:14:11.385558  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.406160  483855 kic.go:430] container "ha-579393" state is running.
	I1014 20:14:11.406595  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:11.427601  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.427957  483855 machine.go:93] provisionDockerMachine start ...
	I1014 20:14:11.428045  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:11.447692  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:11.447993  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:11.448015  483855 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:14:11.448627  483855 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43868->127.0.0.1:32908: read: connection reset by peer
	I1014 20:14:14.598160  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.598192  483855 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:14:14.598246  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.616421  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.616660  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.616677  483855 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:14:14.772817  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.772902  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.791583  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.791976  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.792005  483855 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:14:14.941113  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:14:14.941153  483855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:14:14.941182  483855 ubuntu.go:190] setting up certificates
	I1014 20:14:14.941192  483855 provision.go:84] configureAuth start
	I1014 20:14:14.941248  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:14.959540  483855 provision.go:143] copyHostCerts
	I1014 20:14:14.959581  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959610  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:14:14.959626  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959736  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:14:14.959861  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959885  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:14:14.959890  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959924  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:14:14.959979  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.959996  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:14:14.960003  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.960029  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:14:14.960082  483855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:14:15.029188  483855 provision.go:177] copyRemoteCerts
	I1014 20:14:15.029258  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:14:15.029297  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.048357  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.153017  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:14:15.153075  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:14:15.172076  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:14:15.172147  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 20:14:15.191156  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:14:15.191247  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:14:15.209919  483855 provision.go:87] duration metric: took 268.700795ms to configureAuth
	I1014 20:14:15.209952  483855 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:14:15.210139  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:15.210238  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.228740  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:15.229042  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:15.229063  483855 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:14:15.497697  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:14:15.497742  483855 machine.go:96] duration metric: took 4.069763695s to provisionDockerMachine
	I1014 20:14:15.497775  483855 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:14:15.497793  483855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:14:15.497866  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:14:15.497946  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.517186  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.621698  483855 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:14:15.625589  483855 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:14:15.625615  483855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:14:15.625644  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:14:15.625730  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:14:15.625831  483855 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:14:15.625845  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:14:15.625954  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:14:15.634004  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:15.652848  483855 start.go:296] duration metric: took 155.05253ms for postStartSetup
	I1014 20:14:15.652947  483855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:14:15.652999  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.671676  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.772231  483855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:14:15.777428  483855 fix.go:56] duration metric: took 4.661453251s for fixHost
	I1014 20:14:15.777461  483855 start.go:83] releasing machines lock for "ha-579393", held for 4.661520575s
	I1014 20:14:15.777540  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:15.795367  483855 ssh_runner.go:195] Run: cat /version.json
	I1014 20:14:15.795414  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.795438  483855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:14:15.795537  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.813628  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.814338  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.974436  483855 ssh_runner.go:195] Run: systemctl --version
	I1014 20:14:15.981351  483855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:14:16.017956  483855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:14:16.023150  483855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:14:16.023222  483855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:14:16.031654  483855 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:14:16.031679  483855 start.go:495] detecting cgroup driver to use...
	I1014 20:14:16.031717  483855 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:14:16.031802  483855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:14:16.048436  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:14:16.061476  483855 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:14:16.061544  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:14:16.076780  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:14:16.090012  483855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:14:16.170317  483855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:14:16.252741  483855 docker.go:234] disabling docker service ...
	I1014 20:14:16.252834  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:14:16.268133  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:14:16.281337  483855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:14:16.362683  483855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:14:16.445975  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:14:16.459439  483855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:14:16.474704  483855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:14:16.474792  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.484201  483855 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:14:16.484258  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.493514  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.502774  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.511809  483855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:14:16.520391  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.529310  483855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.538091  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.547967  483855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:14:16.555555  483855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:14:16.562993  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:16.640477  483855 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:14:16.746213  483855 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:14:16.746283  483855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:14:16.750894  483855 start.go:563] Will wait 60s for crictl version
	I1014 20:14:16.750948  483855 ssh_runner.go:195] Run: which crictl
	I1014 20:14:16.754907  483855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:14:16.780375  483855 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:14:16.780469  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.809161  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.841558  483855 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:14:16.843303  483855 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:14:16.860993  483855 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:14:16.865603  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:14:16.876543  483855 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:14:16.876681  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:16.876735  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.910114  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.910136  483855 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:14:16.910188  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.938328  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.938351  483855 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:14:16.938359  483855 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:14:16.938454  483855 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:14:16.938514  483855 ssh_runner.go:195] Run: crio config
	I1014 20:14:16.986141  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:16.986163  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:16.986185  483855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:14:16.986206  483855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:14:16.986342  483855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
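The kubeadm.go:196 dump above is produced by rendering a Go text/template with the per-node values (advertise address, node name, CRI socket) from the kubeadm options line, and the result is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch of that rendering step, with an abbreviated template and an illustrative nodeParams struct (both assumptions for illustration, not minikube's actual bootstrapper types):

// Sketch: render an InitConfiguration stanza from a template plus node values.
package main

import (
	"os"
	"text/template"
)

// nodeParams is a hypothetical holder for the values that vary per node.
type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Render to stdout; the real flow scp's the result to the node instead.
	_ = t.Execute(os.Stdout, nodeParams{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		NodeName:         "ha-579393",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}

Running it prints the same InitConfiguration values seen in the config dump above.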
	I1014 20:14:16.986402  483855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:14:16.994771  483855 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:14:16.994843  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:14:17.002873  483855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:14:17.016475  483855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:14:17.030150  483855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:14:17.044797  483855 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:14:17.048950  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
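The /etc/hosts one-liner above is idempotent: it strips any existing line ending in a tab plus "control-plane.minikube.internal" and appends a fresh mapping, so re-running it never duplicates the entry. The same effect in Go, as a rough sketch (upsertHost is a hypothetical helper, not a minikube function):

// Sketch: idempotently (re)write one hosts entry, mirroring the bash pipeline.
package main

import (
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror `grep -v $'\t<name>$'`: drop stale entries for this name.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
}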
	I1014 20:14:17.059592  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:17.138128  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:17.162331  483855 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:14:17.162363  483855 certs.go:195] generating shared ca certs ...
	I1014 20:14:17.162382  483855 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.162522  483855 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:14:17.162565  483855 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:14:17.162572  483855 certs.go:257] generating profile certs ...
	I1014 20:14:17.162658  483855 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:14:17.162681  483855 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:14:17.162721  483855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 20:14:17.668666  483855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 ...
	I1014 20:14:17.668699  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1: {Name:mk8a02e133127c09314986455d50a58a5753fa21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.668891  483855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 ...
	I1014 20:14:17.668912  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1: {Name:mk7efe25de40b153ba4b5ad91ea2ae7247892281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.669022  483855 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:14:17.669169  483855 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
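The crypto.go:68 line earlier shows the apiserver certificate being generated with four IP SANs: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.49.2. A self-signed crypto/x509 sketch that reproduces those SANs (minikube actually signs with the shared minikubeCA, and the RSA-2048/one-year choices here are assumptions):

// Sketch: emit a server certificate carrying the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the IP SANs from the crypto.go:68 line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	// Self-signed here for brevity; the real cert is signed by minikubeCA.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}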
	I1014 20:14:17.669333  483855 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:14:17.669353  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:14:17.669372  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:14:17.669388  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:14:17.669407  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:14:17.669426  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:14:17.669444  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:14:17.669462  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:14:17.669483  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:14:17.669558  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:14:17.669604  483855 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:14:17.669618  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:14:17.669651  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:14:17.669682  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:14:17.669720  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:14:17.669797  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:17.669838  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.669858  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.669877  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:14:17.670422  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:14:17.694853  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:14:17.714851  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:14:17.735389  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:14:17.753540  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:14:17.771256  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:14:17.788532  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:14:17.806351  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:14:17.824287  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:14:17.842307  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:14:17.860337  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:14:17.877985  483855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:14:17.891323  483855 ssh_runner.go:195] Run: openssl version
	I1014 20:14:17.898069  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:14:17.907081  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911053  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911125  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.945120  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:14:17.953947  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:14:17.962658  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966544  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966600  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:14:18.000895  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:14:18.009773  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:14:18.018643  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022518  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022596  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.057289  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:14:18.065942  483855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:14:18.070012  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:14:18.104032  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:14:18.138282  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:14:18.173092  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:14:18.207994  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:14:18.243571  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
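Each `openssl x509 -checkend 86400` call above asks whether the certificate remains valid for at least another 86400 seconds (24 hours); a failing check is what would trigger regeneration. An equivalent check in Go (expiresWithin is a hypothetical helper name):

// Sketch: report whether a PEM certificate expires within a given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same question as `openssl x509 -checkend`: is now+window past NotAfter?
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // true would mean the cert needs regenerating
}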
	I1014 20:14:18.292429  483855 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:18.292512  483855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:14:18.292580  483855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:14:18.321112  483855 cri.go:89] found id: ""
	I1014 20:14:18.321184  483855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:14:18.329855  483855 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:14:18.329881  483855 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:14:18.329939  483855 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:14:18.337735  483855 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:14:18.338246  483855 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.338385  483855 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:14:18.338768  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.339441  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.340024  483855 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:14:18.340045  483855 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:14:18.340052  483855 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:14:18.340058  483855 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:14:18.340056  483855 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:14:18.340064  483855 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:14:18.340494  483855 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:14:18.348268  483855 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:14:18.348302  483855 kubeadm.go:601] duration metric: took 18.415642ms to restartPrimaryControlPlane
	I1014 20:14:18.348309  483855 kubeadm.go:402] duration metric: took 55.890314ms to StartCluster
	I1014 20:14:18.348323  483855 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.348383  483855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.348891  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
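kubeconfig.go:62 above found the kubeconfig missing both the "ha-579393" cluster and context entries and repaired it under a write lock. A rough sketch of that repair using client-go's clientcmd package (field values copied from the log; nil/error handling elided for brevity, so this is illustrative rather than production code):

// Sketch: add missing cluster/context entries to a kubeconfig, then write it back.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21409-413763/kubeconfig"
	cfg, _ := clientcmd.LoadFromFile(path)

	cluster := api.NewCluster()
	cluster.Server = "https://192.168.49.2:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt"
	cfg.Clusters["ha-579393"] = cluster

	ctx := api.NewContext()
	ctx.Cluster = "ha-579393"
	ctx.AuthInfo = "ha-579393"
	cfg.Contexts["ha-579393"] = ctx

	_ = clientcmd.WriteToFile(*cfg, path)
}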
	I1014 20:14:18.349121  483855 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:14:18.349176  483855 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:14:18.349275  483855 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:14:18.349292  483855 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:14:18.349334  483855 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:14:18.349340  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:18.349299  483855 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:14:18.349402  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.349612  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.349733  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.351949  483855 out.go:179] * Verifying Kubernetes components...
	I1014 20:14:18.353493  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:18.370229  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.370618  483855 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:14:18.370669  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.371116  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.372806  483855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:14:18.374593  483855 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.374613  483855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:14:18.374661  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.394050  483855 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:18.394079  483855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:14:18.394142  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.400808  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.415928  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.464479  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:18.478392  483855 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
	I1014 20:14:18.511484  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.526970  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:18.569362  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.569451  483855 retry.go:31] will retry after 139.387299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:18.586526  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.586563  483855 retry.go:31] will retry after 373.05987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
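From here the log settles into a retry loop: the apiserver is not yet listening on 8443, so every `kubectl apply` fails validation with "connection refused" and retry.go schedules another attempt after a growing, jittered delay. A compact sketch of that pattern (the exact backoff and jitter policy here is an assumption, not minikube's retry.go):

// Sketch: re-run an apply with exponential backoff plus jitter until it succeeds.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	delay := 150 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest)
		if err = cmd.Run(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2 // exponential growth, matching the widening intervals in the log
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 10)
}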
	I1014 20:14:18.709958  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:18.765681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.765716  483855 retry.go:31] will retry after 437.429458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.960052  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.015690  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.015732  483855 retry.go:31] will retry after 493.852226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.204088  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.257288  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.257326  483855 retry.go:31] will retry after 639.980295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.510433  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.566057  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.566095  483855 retry.go:31] will retry after 314.039838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.880614  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:19.898421  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.942792  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.942837  483855 retry.go:31] will retry after 998.489046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:19.959292  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.959332  483855 retry.go:31] will retry after 630.832334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:20.480037  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
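In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/ha-579393 every couple of seconds, treating "connection refused" as retryable until the 6m0s budget from start.go:235 runs out. A minimal client-go sketch of that wait (poll interval and logging are assumptions; error handling trimmed):

// Sketch: wait for a node's Ready condition, tolerating apiserver downtime.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	client, _ := kubernetes.NewForConfig(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-579393", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} // connection refused falls through to the next poll, as in the log
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}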
	I1014 20:14:20.591264  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:20.646581  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.646618  483855 retry.go:31] will retry after 1.015213679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.942176  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:20.995773  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.995805  483855 retry.go:31] will retry after 1.667312943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.662122  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:21.716333  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.716374  483855 retry.go:31] will retry after 2.064978127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.663519  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:22.718117  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.718148  483855 retry.go:31] will retry after 1.187936777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:22.979048  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:23.781666  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:23.836207  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.836236  483855 retry.go:31] will retry after 4.211845068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.906464  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:23.962253  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.962286  483855 retry.go:31] will retry after 1.446389172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:24.979293  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:25.408845  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:25.464280  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:25.464314  483855 retry.go:31] will retry after 4.913115671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:26.979811  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:28.048551  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:28.103870  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:28.103926  483855 retry.go:31] will retry after 5.848016942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:29.479152  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:30.377796  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:30.434283  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:30.434318  483855 retry.go:31] will retry after 3.766557474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:31.479517  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:33.953096  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:33.979303  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:34.008898  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.008931  483855 retry.go:31] will retry after 9.465931342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.201242  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:34.257447  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.257479  483855 retry.go:31] will retry after 6.854944728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:35.979488  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:37.979541  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:40.479409  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:41.113186  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:41.169845  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:41.169886  483855 retry.go:31] will retry after 7.326807796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:42.480113  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:43.475406  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:43.532885  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:43.532922  483855 retry.go:31] will retry after 5.727455615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:44.979266  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:47.479387  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:48.497090  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:48.552250  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:48.552292  483855 retry.go:31] will retry after 19.686847261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.260622  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:49.315681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.315717  483855 retry.go:31] will retry after 10.36859919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:49.479476  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:51.479855  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:53.979220  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:55.979988  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:58.479745  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:59.685187  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:59.739249  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:59.739293  483855 retry.go:31] will retry after 17.039426961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:00.979569  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:02.980017  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:05.479408  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:07.479974  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:08.239444  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:08.295601  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:08.295641  483855 retry.go:31] will retry after 35.811237481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:09.979273  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:12.479310  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:14.480040  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:16.779044  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:16.836669  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:16.836718  483855 retry.go:31] will retry after 20.079248911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:16.979513  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:19.479421  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:21.979452  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:24.479358  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:26.979383  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:29.479315  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:31.979207  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:33.979957  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:36.479122  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:36.916703  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:36.972468  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:36.972659  483855 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1014 20:15:38.479432  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:40.479519  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:42.979247  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:44.107661  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:44.164144  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:44.164298  483855 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:15:44.166449  483855 out.go:179] * Enabled addons: 
	I1014 20:15:44.168164  483855 addons.go:514] duration metric: took 1m25.81898405s for enable addons: enabled=[]
	W1014 20:15:45.479183  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" retry line repeats roughly every 2.5s from 20:15:47 through 20:20:13 (113 near-identical lines elided) ...]
	W1014 20:20:15.979749  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:18.479349  483855 node_ready.go:38] duration metric: took 6m0.000903914s for node "ha-579393" to be "Ready" ...
	I1014 20:20:18.482302  483855 out.go:203] 
	W1014 20:20:18.483783  483855 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:20:18.483798  483855 out.go:285] * 
	W1014 20:20:18.485408  483855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:20:18.486562  483855 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-579393 node list --alsologtostderr -v 5" : exit status 80
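For context on the failure above: the node_ready.go wait loop polls the node's Ready condition until its 6-minute deadline, and every poll here failed because the apiserver at 192.168.49.2:8443 never came back after the restart. A minimal client-go sketch of that polling pattern, purely illustrative and not minikube's actual implementation (the kubeconfig path and node name are copied from the log above; the waitNodeReady helper is hypothetical):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition until ctx expires,
	// retrying through transient "connection refused" errors as in the log.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-579393"); err != nil {
			fmt.Println(err)
		}
	}

With the apiserver down, every Get returns a dial error and the loop runs until the deadline, matching the GUEST_START / WaitNodeCondition exit above.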
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:14:11.163807444Z",
	            "FinishedAt": "2025-10-14T20:14:10.012215428Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e28d83e308e6af894f39de197ae2094de94e5854c96689c87348eaa90862b2d",
	            "SandboxKey": "/var/run/docker/netns/8e28d83e308e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32910"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4c:9d:2b:25:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "a2ff963b7135d73b1a96b4db82df9afe9bb179109a039548974da4d367372ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (313.399651ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ ha-579393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:03 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- rollout status deployment/busybox                                                          │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                                                       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                                                                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                                                                 │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                                      │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                                                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                                      │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
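	The Audit table compresses the failing scenario: the HA start at 20:03, a full stop at 20:14, then a "start --wait true" at 20:14 that never records an end time, which is why the post-mortem below dissects that second start. A sketch of the same sequence against an existing profile (standard minikube flags, not a verbatim replay of the harness):
	
	  minikube -p ha-579393 stop --alsologtostderr -v 5
	  minikube -p ha-579393 start --wait true --alsologtostderr -v 5
	  minikube -p ha-579393 node list --alsologtostderr -v 5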
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:14:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:14:10.920500  483855 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:14:10.920744  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920765  483855 out.go:374] Setting ErrFile to fd 2...
	I1014 20:14:10.920770  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920950  483855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:14:10.921400  483855 out.go:368] Setting JSON to false
	I1014 20:14:10.922307  483855 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10597,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:14:10.922423  483855 start.go:141] virtualization: kvm guest
	I1014 20:14:10.924678  483855 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:14:10.925922  483855 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:14:10.925932  483855 notify.go:220] Checking for updates...
	I1014 20:14:10.928150  483855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:14:10.929578  483855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:10.931110  483855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:14:10.932372  483855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:14:10.933593  483855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:14:10.935251  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:10.935376  483855 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:14:10.960161  483855 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:14:10.960301  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.020772  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.009250952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.020894  483855 docker.go:318] overlay module found
	I1014 20:14:11.022935  483855 out.go:179] * Using the docker driver based on existing profile
	I1014 20:14:11.024198  483855 start.go:305] selected driver: docker
	I1014 20:14:11.024214  483855 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.024304  483855 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:14:11.024438  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.086893  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.076411866 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.087678  483855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:14:11.087721  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:11.087800  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:11.087868  483855 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.090005  483855 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:14:11.091314  483855 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:14:11.092803  483855 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:14:11.094111  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:11.094148  483855 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:14:11.094156  483855 cache.go:58] Caching tarball of preloaded images
	I1014 20:14:11.094218  483855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:14:11.094241  483855 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:14:11.094277  483855 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:14:11.094382  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.115783  483855 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:14:11.115808  483855 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:14:11.115828  483855 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:14:11.115855  483855 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:14:11.115928  483855 start.go:364] duration metric: took 47.72µs to acquireMachinesLock for "ha-579393"
	I1014 20:14:11.115949  483855 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:14:11.115957  483855 fix.go:54] fixHost starting: 
	I1014 20:14:11.116246  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.133733  483855 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:14:11.133784  483855 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:14:11.135788  483855 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:14:11.135872  483855 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:14:11.385558  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.406160  483855 kic.go:430] container "ha-579393" state is running.
	I1014 20:14:11.406595  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:11.427601  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.427957  483855 machine.go:93] provisionDockerMachine start ...
	I1014 20:14:11.428045  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:11.447692  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:11.447993  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:11.448015  483855 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:14:11.448627  483855 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43868->127.0.0.1:32908: read: connection reset by peer
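	The dial failure above is expected: the container was just restarted and sshd is not accepting connections yet, so the provisioner simply retries until the hostname probe succeeds. A sketch of an equivalent manual readiness loop (port, key path, and user are taken from this log):
	
	  until ssh -o StrictHostKeyChecking=no -p 32908 \
	      -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa \
	      docker@127.0.0.1 hostname; do
	    sleep 1
	  done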
	I1014 20:14:14.598160  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.598192  483855 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:14:14.598246  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.616421  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.616660  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.616677  483855 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:14:14.772817  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.772902  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.791583  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.791976  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.792005  483855 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:14:14.941113  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:14:14.941153  483855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:14:14.941182  483855 ubuntu.go:190] setting up certificates
	I1014 20:14:14.941192  483855 provision.go:84] configureAuth start
	I1014 20:14:14.941248  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:14.959540  483855 provision.go:143] copyHostCerts
	I1014 20:14:14.959581  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959610  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:14:14.959626  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959736  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:14:14.959861  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959885  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:14:14.959890  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959924  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:14:14.959979  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.959996  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:14:14.960003  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.960029  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:14:14.960082  483855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
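	A sketch (not part of the run) of confirming that the server cert generated above carries the requested SANs, using the path from the log:
	
	  openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # expect DNS/IP entries for ha-579393, localhost, minikube, 127.0.0.1, 192.168.49.2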
	I1014 20:14:15.029188  483855 provision.go:177] copyRemoteCerts
	I1014 20:14:15.029258  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:14:15.029297  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.048357  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.153017  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:14:15.153075  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:14:15.172076  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:14:15.172147  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 20:14:15.191156  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:14:15.191247  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:14:15.209919  483855 provision.go:87] duration metric: took 268.700795ms to configureAuth
	I1014 20:14:15.209952  483855 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:14:15.210139  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:15.210238  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.228740  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:15.229042  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:15.229063  483855 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:14:15.497697  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:14:15.497742  483855 machine.go:96] duration metric: took 4.069763695s to provisionDockerMachine
	I1014 20:14:15.497775  483855 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:14:15.497793  483855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:14:15.497866  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:14:15.497946  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.517186  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.621698  483855 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:14:15.625589  483855 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:14:15.625615  483855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:14:15.625644  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:14:15.625730  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:14:15.625831  483855 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:14:15.625845  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:14:15.625954  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:14:15.634004  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:15.652848  483855 start.go:296] duration metric: took 155.05253ms for postStartSetup
	I1014 20:14:15.652947  483855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:14:15.652999  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.671676  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.772231  483855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:14:15.777428  483855 fix.go:56] duration metric: took 4.661453251s for fixHost
	I1014 20:14:15.777461  483855 start.go:83] releasing machines lock for "ha-579393", held for 4.661520575s
	I1014 20:14:15.777540  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:15.795367  483855 ssh_runner.go:195] Run: cat /version.json
	I1014 20:14:15.795414  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.795438  483855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:14:15.795537  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.813628  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.814338  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.974436  483855 ssh_runner.go:195] Run: systemctl --version
	I1014 20:14:15.981351  483855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:14:16.017956  483855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:14:16.023150  483855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:14:16.023222  483855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:14:16.031654  483855 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:14:16.031679  483855 start.go:495] detecting cgroup driver to use...
	I1014 20:14:16.031717  483855 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:14:16.031802  483855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:14:16.048436  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:14:16.061476  483855 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:14:16.061544  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:14:16.076780  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:14:16.090012  483855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:14:16.170317  483855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:14:16.252741  483855 docker.go:234] disabling docker service ...
	I1014 20:14:16.252834  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:14:16.268133  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:14:16.281337  483855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:14:16.362683  483855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:14:16.445975  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:14:16.459439  483855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:14:16.474704  483855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:14:16.474792  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.484201  483855 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:14:16.484258  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.493514  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.502774  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.511809  483855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:14:16.520391  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.529310  483855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.538091  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.547967  483855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:14:16.555555  483855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:14:16.562993  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:16.640477  483855 ssh_runner.go:195] Run: sudo systemctl restart crio
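	The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to the fragment below; this is a reconstruction from the logged commands, not a dump of the real file (section placement follows CRI-O's standard config layout):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]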
	I1014 20:14:16.746213  483855 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:14:16.746283  483855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:14:16.750894  483855 start.go:563] Will wait 60s for crictl version
	I1014 20:14:16.750948  483855 ssh_runner.go:195] Run: which crictl
	I1014 20:14:16.754907  483855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:14:16.780375  483855 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:14:16.780469  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.809161  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.841558  483855 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:14:16.843303  483855 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
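	Sketch: the subnet and gateway that the long template above extracts can also be read with a shorter format string; the values shown are consistent with the /24 prefix and gateway in the container inspect output earlier in this log:
	
	  docker network inspect ha-579393 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # 192.168.49.0/24 192.168.49.1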
	I1014 20:14:16.860993  483855 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:14:16.865603  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
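	The bash one-liner above is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back into place. The same idiom generalized into a helper (the function name and interface are this sketch's, not minikube's):
	
	  update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
	    { grep -v "$(printf '\t')$2\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	  }
	  update_hosts_entry 192.168.49.1 host.minikube.internal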
	I1014 20:14:16.876543  483855 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:14:16.876681  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:16.876735  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.910114  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.910136  483855 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:14:16.910188  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.938328  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.938351  483855 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:14:16.938359  483855 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:14:16.938454  483855 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:14:16.938514  483855 ssh_runner.go:195] Run: crio config
	I1014 20:14:16.986141  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:16.986163  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:16.986185  483855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:14:16.986206  483855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:14:16.986342  483855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
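	A sketch of validating the assembled kubeadm config above before it is copied in; "kubeadm config validate" is a standard subcommand in this Kubernetes generation, though the harness does not invoke it here (binary and file paths from the log):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new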
	
	I1014 20:14:16.986402  483855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:14:16.994771  483855 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:14:16.994843  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:14:17.002873  483855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:14:17.016475  483855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:14:17.030150  483855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:14:17.044797  483855 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:14:17.048950  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:14:17.059592  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:17.138128  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:17.162331  483855 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:14:17.162363  483855 certs.go:195] generating shared ca certs ...
	I1014 20:14:17.162382  483855 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.162522  483855 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:14:17.162565  483855 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:14:17.162572  483855 certs.go:257] generating profile certs ...
	I1014 20:14:17.162658  483855 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:14:17.162681  483855 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:14:17.162721  483855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 20:14:17.668666  483855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 ...
	I1014 20:14:17.668699  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1: {Name:mk8a02e133127c09314986455d50a58a5753fa21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.668891  483855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 ...
	I1014 20:14:17.668912  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1: {Name:mk7efe25de40b153ba4b5ad91ea2ae7247892281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.669022  483855 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:14:17.669169  483855 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:14:17.669333  483855 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:14:17.669353  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:14:17.669372  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:14:17.669388  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:14:17.669407  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:14:17.669426  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:14:17.669444  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:14:17.669462  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:14:17.669483  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:14:17.669558  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:14:17.669604  483855 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:14:17.669618  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:14:17.669651  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:14:17.669682  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:14:17.669720  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:14:17.669797  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:17.669838  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.669858  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.669877  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:14:17.670422  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:14:17.694853  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:14:17.714851  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:14:17.735389  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:14:17.753540  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:14:17.771256  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:14:17.788532  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:14:17.806351  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:14:17.824287  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:14:17.842307  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:14:17.860337  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:14:17.877985  483855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:14:17.891323  483855 ssh_runner.go:195] Run: openssl version
	I1014 20:14:17.898069  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:14:17.907081  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911053  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911125  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.945120  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:14:17.953947  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:14:17.962658  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966544  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966600  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:14:18.000895  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:14:18.009773  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:14:18.018643  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022518  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022596  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.057289  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
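
The three "openssl x509 -hash" / "ln -fs" pairs above are how minikube publishes each CA into OpenSSL's hashed-lookup directory: openssl prints the subject-hash name, and the certificate is then symlinked as <hash>.0 under /etc/ssl/certs. Below is a minimal Go sketch of the same pair, shelling out to openssl just as the ssh_runner lines do; it is illustrative only, not minikube's code, and the paths are the ones from the log used as placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installHashedLink mirrors the "openssl x509 -hash" + "ln -fs" pair in the
// log: compute the subject hash of a PEM cert, then symlink it as <hash>.0
// under certsDir so OpenSSL's hashed lookup can find it.
func installHashedLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // "-f" semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// Path taken from the log; treat it as a placeholder.
	if err := installHashedLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
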
	I1014 20:14:18.065942  483855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:14:18.070012  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:14:18.104032  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:14:18.138282  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:14:18.173092  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:14:18.207994  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:14:18.243571  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
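
Each "-checkend 86400" run above asks whether that certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same question can be answered in pure Go with crypto/x509 — a small sketch, with a hypothetical path standing in for the certs checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window -- the check "openssl x509 -checkend 86400" performs.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path standing in for the certs checked in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
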
	I1014 20:14:18.292429  483855 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:18.292512  483855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:14:18.292580  483855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:14:18.321112  483855 cri.go:89] found id: ""
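
The empty "found id" result above means the crictl query returned no container IDs, i.e. no kube-system containers are running before the restart. A small Go sketch of the same query, with the command and label copied verbatim from the Run line above (an illustration, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query as the log's cri.go
// step and returns the container IDs it printed; an empty slice corresponds
// to the `found id: ""` line above.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
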
	I1014 20:14:18.321184  483855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:14:18.329855  483855 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:14:18.329881  483855 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:14:18.329939  483855 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:14:18.337735  483855 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:14:18.338246  483855 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.338385  483855 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:14:18.338768  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.339441  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
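
The kapi.go line above dumps the rest.Config that minikube builds from the profile's client certificate, client key, and cluster CA. A minimal client-go sketch constructing that kind of certificate-authenticated config follows; the host and file paths are copied from the log and should be treated as placeholders, and this is a sketch rather than minikube's actual kapi code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Certificate-based auth, matching the TLSClientConfig fields in the log.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err) // expected while the apiserver is still refusing connections
	}
	fmt.Println("nodes:", len(nodes.Items))
}
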
	I1014 20:14:18.340024  483855 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:14:18.340045  483855 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:14:18.340052  483855 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:14:18.340058  483855 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:14:18.340056  483855 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:14:18.340064  483855 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:14:18.340494  483855 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:14:18.348268  483855 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:14:18.348302  483855 kubeadm.go:601] duration metric: took 18.415642ms to restartPrimaryControlPlane
	I1014 20:14:18.348309  483855 kubeadm.go:402] duration metric: took 55.890314ms to StartCluster
	I1014 20:14:18.348323  483855 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.348383  483855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.348891  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.349121  483855 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:14:18.349176  483855 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:14:18.349275  483855 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:14:18.349292  483855 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:14:18.349334  483855 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:14:18.349340  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:18.349299  483855 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:14:18.349402  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.349612  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.349733  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.351949  483855 out.go:179] * Verifying Kubernetes components...
	I1014 20:14:18.353493  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:18.370229  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.370618  483855 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:14:18.370669  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.371116  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.372806  483855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:14:18.374593  483855 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.374613  483855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:14:18.374661  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.394050  483855 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:18.394079  483855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:14:18.394142  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.400808  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.415928  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.464479  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:18.478392  483855 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
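
From here, node_ready.go polls the node object until its Ready condition reports True or the 6m0s budget expires; the "connection refused" warnings further down are those polls failing while the apiserver comes back. A hedged sketch of such a readiness poll with client-go (not minikube's actual loop; the fake clientset in main is only for demonstration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitNodeReady polls a node's Ready condition until it is True or the
// deadline passes. API errors (like the "connection refused" warnings in
// the log) are tolerated and simply retried on the next tick.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	// A fake clientset seeded with an already-Ready node, for demonstration.
	node := &corev1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: "ha-579393"},
		Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
		}},
	}
	fmt.Println(waitNodeReady(context.Background(), fake.NewSimpleClientset(node), "ha-579393", 10*time.Second))
}
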
	I1014 20:14:18.511484  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.526970  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:18.569362  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.569451  483855 retry.go:31] will retry after 139.387299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:18.586526  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.586563  483855 retry.go:31] will retry after 373.05987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
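
The retry.go:31 lines sleep a growing, jittered delay between attempts, which is why the "will retry after" intervals climb irregularly from ~139ms toward tens of seconds in the stanzas that follow. A dependency-free sketch of that backoff pattern, illustrative only and not minikube's implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered delay between tries --
// the shape of the "will retry after ..." lines in the log.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter: sleep between 0.5x and 1.5x of the nominal delay.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("connection refused")
		}
		return nil
	}, 10, 150*time.Millisecond)
}
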
	I1014 20:14:18.709958  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:18.765681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.765716  483855 retry.go:31] will retry after 437.429458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.960052  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.015690  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.015732  483855 retry.go:31] will retry after 493.852226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.204088  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.257288  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.257326  483855 retry.go:31] will retry after 639.980295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.510433  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.566057  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.566095  483855 retry.go:31] will retry after 314.039838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.880614  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:19.898421  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.942792  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.942837  483855 retry.go:31] will retry after 998.489046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:19.959292  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.959332  483855 retry.go:31] will retry after 630.832334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:20.480037  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:20.591264  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:20.646581  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.646618  483855 retry.go:31] will retry after 1.015213679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.942176  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:20.995773  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.995805  483855 retry.go:31] will retry after 1.667312943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.662122  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:21.716333  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.716374  483855 retry.go:31] will retry after 2.064978127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.663519  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:22.718117  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.718148  483855 retry.go:31] will retry after 1.187936777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:22.979048  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:23.781666  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:23.836207  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.836236  483855 retry.go:31] will retry after 4.211845068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.906464  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:23.962253  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.962286  483855 retry.go:31] will retry after 1.446389172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:24.979293  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:25.408845  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:25.464280  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:25.464314  483855 retry.go:31] will retry after 4.913115671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:26.979811  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:28.048551  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:28.103870  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:28.103926  483855 retry.go:31] will retry after 5.848016942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:29.479152  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:30.377796  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:30.434283  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:30.434318  483855 retry.go:31] will retry after 3.766557474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:31.479517  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:33.953096  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:33.979303  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:34.008898  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.008931  483855 retry.go:31] will retry after 9.465931342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.201242  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:34.257447  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.257479  483855 retry.go:31] will retry after 6.854944728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:35.979488  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:37.979541  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:40.479409  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:41.113186  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:41.169845  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:41.169886  483855 retry.go:31] will retry after 7.326807796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:42.480113  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:43.475406  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:43.532885  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:43.532922  483855 retry.go:31] will retry after 5.727455615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:44.979266  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:47.479387  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:48.497090  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:48.552250  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:48.552292  483855 retry.go:31] will retry after 19.686847261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.260622  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:49.315681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.315717  483855 retry.go:31] will retry after 10.36859919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:49.479476  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:51.479855  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:53.979220  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:55.979988  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:58.479745  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:59.685187  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:59.739249  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:59.739293  483855 retry.go:31] will retry after 17.039426961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:00.979569  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:02.980017  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:05.479408  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:07.479974  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:08.239444  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:08.295601  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:08.295641  483855 retry.go:31] will retry after 35.811237481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:09.979273  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:12.479310  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:14.480040  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:16.779044  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:16.836669  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:16.836718  483855 retry.go:31] will retry after 20.079248911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:16.979513  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:19.479421  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:21.979452  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:24.479358  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:26.979383  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:29.479315  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:31.979207  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:33.979957  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:36.479122  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:36.916703  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:36.972468  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:36.972659  483855 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1014 20:15:38.479432  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:40.479519  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:42.979247  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:44.107661  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:44.164144  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:44.164298  483855 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:15:44.166449  483855 out.go:179] * Enabled addons: 
	I1014 20:15:44.168164  483855 addons.go:514] duration metric: took 1m25.81898405s for enable addons: enabled=[]
	W1014 20:15:45.479183  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 connection-refused warning repeats roughly every 2-2.5s from 20:15:47 through 20:20:13; 113 lines elided ...]
	W1014 20:20:15.979749  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:18.479349  483855 node_ready.go:38] duration metric: took 6m0.000903914s for node "ha-579393" to be "Ready" ...
	I1014 20:20:18.482302  483855 out.go:203] 
	W1014 20:20:18.483783  483855 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:20:18.483798  483855 out.go:285] * 
	W1014 20:20:18.485408  483855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:20:18.486562  483855 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:20:06 ha-579393 crio[520]: time="2025-10-14T20:20:06.277592775Z" level=info msg="createCtr: removing container 2ddbcf3bf65e4a50a572194ced69dab930496fe71ea3d9bd7f4e0baf87705ed9" id=d904107d-6ea7-4a52-be2b-f49fa941280e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:06 ha-579393 crio[520]: time="2025-10-14T20:20:06.277634288Z" level=info msg="createCtr: deleting container 2ddbcf3bf65e4a50a572194ced69dab930496fe71ea3d9bd7f4e0baf87705ed9 from storage" id=d904107d-6ea7-4a52-be2b-f49fa941280e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:06 ha-579393 crio[520]: time="2025-10-14T20:20:06.279882819Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=d904107d-6ea7-4a52-be2b-f49fa941280e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.252896936Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c5a0f859-3d1f-49d4-8a5a-76fd53cd6780 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.254078548Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6ebcd093-0f0a-477c-b62e-ec91661c4cce name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.255128114Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=513afa1c-1126-4326-ad68-5c293a663444 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.255396391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.25880367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.259285277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.276886096Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=513afa1c-1126-4326-ad68-5c293a663444 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.278333556Z" level=info msg="createCtr: deleting container ID 7e38648deb43c60d7d2c6ac4e87fdebd84d91dcacb13641cb9b3255a74e34a98 from idIndex" id=513afa1c-1126-4326-ad68-5c293a663444 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.278371707Z" level=info msg="createCtr: removing container 7e38648deb43c60d7d2c6ac4e87fdebd84d91dcacb13641cb9b3255a74e34a98" id=513afa1c-1126-4326-ad68-5c293a663444 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.27840453Z" level=info msg="createCtr: deleting container 7e38648deb43c60d7d2c6ac4e87fdebd84d91dcacb13641cb9b3255a74e34a98 from storage" id=513afa1c-1126-4326-ad68-5c293a663444 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:08 ha-579393 crio[520]: time="2025-10-14T20:20:08.280672572Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=513afa1c-1126-4326-ad68-5c293a663444 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.253384284Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=805ec65f-70bf-4dac-8414-52e7ff56d287 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.254364956Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=63ffa242-3b0a-4252-a5ff-a7c93cac0fdc name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.255342565Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-579393/kube-scheduler" id=1b9eb083-baa0-458c-ba47-250d8c641411 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.255588588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.258971081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.259384252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.275338162Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1b9eb083-baa0-458c-ba47-250d8c641411 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.276802395Z" level=info msg="createCtr: deleting container ID 8ee354b412f73742576827c6e1d2c3fd56cfde09e191d14f7ac64584458e38ad from idIndex" id=1b9eb083-baa0-458c-ba47-250d8c641411 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.276840127Z" level=info msg="createCtr: removing container 8ee354b412f73742576827c6e1d2c3fd56cfde09e191d14f7ac64584458e38ad" id=1b9eb083-baa0-458c-ba47-250d8c641411 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.276882675Z" level=info msg="createCtr: deleting container 8ee354b412f73742576827c6e1d2c3fd56cfde09e191d14f7ac64584458e38ad from storage" id=1b9eb083-baa0-458c-ba47-250d8c641411 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:16 ha-579393 crio[520]: time="2025-10-14T20:20:16.279005008Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=1b9eb083-baa0-458c-ba47-250d8c641411 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:20:19.504218    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:19.504879    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:19.506441    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:19.506889    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:19.508477    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:20:19 up  3:02,  0 user,  load average: 0.05, 0.09, 0.35
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:20:06 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:06 ha-579393 kubelet[672]: E1014 20:20:06.280303     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:20:07 ha-579393 kubelet[672]: E1014 20:20:07.102527     672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e74bc7ca63a28  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:14:17.242384936 +0000 UTC m=+0.077696826,LastTimestamp:2025-10-14 20:14:17.242384936 +0000 UTC m=+0.077696826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:20:07 ha-579393 kubelet[672]: E1014 20:20:07.269611     672 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:20:08 ha-579393 kubelet[672]: E1014 20:20:08.252368     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:08 ha-579393 kubelet[672]: E1014 20:20:08.281012     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:08 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:08 ha-579393 kubelet[672]:  > podSandboxID="f180c4075f618508cee2088a1ba338b9bc1be40472f118acf4ce26f50c5f9c95"
	Oct 14 20:20:08 ha-579393 kubelet[672]: E1014 20:20:08.281129     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:08 ha-579393 kubelet[672]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:08 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:08 ha-579393 kubelet[672]: E1014 20:20:08.281170     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:20:12 ha-579393 kubelet[672]: E1014 20:20:12.889358     672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:20:13 ha-579393 kubelet[672]: I1014 20:20:13.069903     672 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:20:13 ha-579393 kubelet[672]: E1014 20:20:13.070297     672 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:20:16 ha-579393 kubelet[672]: E1014 20:20:16.252893     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:16 ha-579393 kubelet[672]: E1014 20:20:16.279332     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:16 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:16 ha-579393 kubelet[672]:  > podSandboxID="b787fe1e5440a0cd7a12bb9c0badefd1b340ddc1fcd29775deacae90954fa071"
	Oct 14 20:20:16 ha-579393 kubelet[672]: E1014 20:20:16.279436     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:16 ha-579393 kubelet[672]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:16 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:16 ha-579393 kubelet[672]: E1014 20:20:16.279469     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:20:17 ha-579393 kubelet[672]: E1014 20:20:17.103846     672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e74bc7ca63a28  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:14:17.242384936 +0000 UTC m=+0.077696826,LastTimestamp:2025-10-14 20:14:17.242384936 +0000 UTC m=+0.077696826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:20:17 ha-579393 kubelet[672]: E1014 20:20:17.269989     672 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	

                                                
                                                
-- /stdout --
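Note on the dump above: the recurring "Container creation error: cannot open sd-bus: No such file or directory" in the CRI-O and kubelet sections is CRI-O failing to reach a systemd bus from inside the node container, which is consistent with a systemd cgroup manager being in use (the docker info captured under DeleteSecondaryNode below reports CgroupDriver:systemd on the host). A minimal diagnostic sketch, assuming the ha-579393 profile from this run is still up; the config path and socket locations are the standard crio/systemd ones, not something this log confirms:

	# Is crio configured for the systemd cgroup manager?
	minikube ssh -p ha-579393 "grep -rn cgroup_manager /etc/crio 2>/dev/null"
	# Is any systemd bus socket actually present inside the node?
	minikube ssh -p ha-579393 "ls -l /run/systemd/private /run/dbus/system_bus_socket"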
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (310.925686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.31s)
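For reference, the "Ready" condition that node_ready.go polled for the full 6m0s above is an ordinary node status condition and can be inspected directly once an apiserver is reachable again. A sketch using standard kubectl jsonpath (only meaningful against a live cluster):

	# Print the status of the node's "Ready" condition (True/False/Unknown);
	# this is the value node_ready.go was waiting on.
	kubectl get node ha-579393 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'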

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 node delete m03 --alsologtostderr -v 5: exit status 103 (258.506789ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-579393 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-579393"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:20:19.963402  488338 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:19.963721  488338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:19.963732  488338 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:19.963737  488338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:19.963943  488338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:19.964246  488338 mustload.go:65] Loading cluster: ha-579393
	I1014 20:20:19.964603  488338 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:19.965007  488338 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:19.983621  488338 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:19.983970  488338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:20.043013  488338 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:20:20.032701969 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:20.043135  488338 api_server.go:166] Checking apiserver status ...
	I1014 20:20:20.043181  488338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:20:20.043222  488338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:20.061102  488338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	W1014 20:20:20.168019  488338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:20.170021  488338 out.go:179] * The control-plane node ha-579393 apiserver is not running: (state=Stopped)
	I1014 20:20:20.171907  488338 out.go:179]   To start a cluster, run: "minikube start -p ha-579393"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-579393 node delete m03 --alsologtostderr -v 5": exit status 103
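
The delete refused to proceed because the preflight shown in the stderr above found the control-plane apiserver stopped, so minikube printed the "minikube start -p ha-579393" hint and exited 103 instead of removing anything. As a hedged illustration only, a wrapper in Go could branch on the exit codes observed in this run (103 here, and exit status 2 from the `status` call below); the codes and binary path are taken from this log, not from a documented minikube exit-code table:

	// Sketch: branch on the exit codes this run produced. Illustrative,
	// not part of the test suite; run() returns -1 if the binary is absent.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) int {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				return ee.ExitCode() // e.g. 103 when the apiserver is stopped
			}
			return -1 // binary missing or failed to start
		}
		return 0
	}

	func main() {
		switch code := run("-p", "ha-579393", "node", "delete", "m03"); code {
		case 0:
			fmt.Println("node deleted")
		case 103:
			fmt.Println("control plane not running; start the cluster first")
		default:
			fmt.Println("unexpected exit:", code)
		}
	}
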
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 2 (307.447017ms)

-- stdout --
	ha-579393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	I1014 20:20:20.224339  488433 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:20.224641  488433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:20.224660  488433 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:20.224665  488433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:20.224873  488433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:20.225050  488433 out.go:368] Setting JSON to false
	I1014 20:20:20.225079  488433 mustload.go:65] Loading cluster: ha-579393
	I1014 20:20:20.225188  488433 notify.go:220] Checking for updates...
	I1014 20:20:20.225420  488433 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:20.225437  488433 status.go:174] checking status of ha-579393 ...
	I1014 20:20:20.225899  488433 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:20.244462  488433 status.go:371] ha-579393 host status = "Running" (err=<nil>)
	I1014 20:20:20.244494  488433 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:20.244840  488433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:20.265128  488433 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:20.265563  488433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:20:20.265612  488433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:20.285818  488433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:20.387225  488433 ssh_runner.go:195] Run: systemctl --version
	I1014 20:20:20.393900  488433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:20:20.407590  488433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:20.468534  488433 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:20:20.457902258 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:20.469128  488433 kubeconfig.go:125] found "ha-579393" server: "https://192.168.49.2:8443"
	I1014 20:20:20.469163  488433 api_server.go:166] Checking apiserver status ...
	I1014 20:20:20.469210  488433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 20:20:20.479799  488433 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:20.479829  488433 status.go:463] ha-579393 apiserver status = Running (err=<nil>)
	I1014 20:20:20.479843  488433 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5" : exit status 2
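
Both failures above trace to the same probe: minikube decides apiserver liveness by running `sudo pgrep -xnf kube-apiserver.*minikube.*` inside the node (api_server.go:166) and treats pgrep's exit status 1, meaning no matching process, as "Stopped". A minimal Go sketch of that pattern, run locally for illustration; in the real flow the command goes over the SSH client set up at sshutil.go:53, which this sketch omits:

	// Sketch of the pgrep-based liveness check seen in the logs.
	// pgrep exits 0 on a match and 1 when nothing matches.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
				fmt.Println("apiserver: Stopped (no matching process)")
				return
			}
			fmt.Println("probe failed:", err)
			return
		}
		fmt.Printf("apiserver: Running (pid %s)", out)
	}
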
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:14:11.163807444Z",
	            "FinishedAt": "2025-10-14T20:14:10.012215428Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e28d83e308e6af894f39de197ae2094de94e5854c96689c87348eaa90862b2d",
	            "SandboxKey": "/var/run/docker/netns/8e28d83e308e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32910"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4c:9d:2b:25:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "a2ff963b7135d73b1a96b4db82df9afe9bb179109a039548974da4d367372ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
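
The `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls that recur throughout the logs read the NetworkSettings.Ports block shown in this inspect output, where 22/tcp is published on 127.0.0.1:32908. A small Go sketch of the same lookup, assuming only that docker is on PATH and the container exists:

	// Sketch: resolve the host port mapped to a container port with the
	// same Go-template query the logs use (cli_runner.go).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		p, err := hostPort("ha-579393", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", p) // 32908 per the inspect above
	}
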
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (313.626106ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
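
`--format={{.Host}}` is a Go text/template rendered against per-node status fields like the ones logged at status.go:176 above, which is why the command prints just "Running" even while the apiserver is down. A sketch with a trimmed stand-in struct (not minikube's actual type):

	// Sketch: how a --format template selects one field of the status.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "ha-579393", Host: "Running", Kubelet: "Running",
			APIServer: "Stopped", Kubeconfig: "Configured"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints "Running", matching the stdout above
	}
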
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- rollout status deployment/busybox                      │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                             │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                          │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:14:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:14:10.920500  483855 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:14:10.920744  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920765  483855 out.go:374] Setting ErrFile to fd 2...
	I1014 20:14:10.920770  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920950  483855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:14:10.921400  483855 out.go:368] Setting JSON to false
	I1014 20:14:10.922307  483855 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10597,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:14:10.922423  483855 start.go:141] virtualization: kvm guest
	I1014 20:14:10.924678  483855 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:14:10.925922  483855 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:14:10.925932  483855 notify.go:220] Checking for updates...
	I1014 20:14:10.928150  483855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:14:10.929578  483855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:10.931110  483855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:14:10.932372  483855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:14:10.933593  483855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:14:10.935251  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:10.935376  483855 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:14:10.960161  483855 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:14:10.960301  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.020772  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.009250952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.020894  483855 docker.go:318] overlay module found
	I1014 20:14:11.022935  483855 out.go:179] * Using the docker driver based on existing profile
	I1014 20:14:11.024198  483855 start.go:305] selected driver: docker
	I1014 20:14:11.024214  483855 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.024304  483855 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:14:11.024438  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.086893  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.076411866 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.087678  483855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:14:11.087721  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:11.087800  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:11.087868  483855 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.090005  483855 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:14:11.091314  483855 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:14:11.092803  483855 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:14:11.094111  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:11.094148  483855 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:14:11.094156  483855 cache.go:58] Caching tarball of preloaded images
	I1014 20:14:11.094218  483855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:14:11.094241  483855 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:14:11.094277  483855 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:14:11.094382  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.115783  483855 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:14:11.115808  483855 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:14:11.115828  483855 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:14:11.115855  483855 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:14:11.115928  483855 start.go:364] duration metric: took 47.72µs to acquireMachinesLock for "ha-579393"
	I1014 20:14:11.115949  483855 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:14:11.115957  483855 fix.go:54] fixHost starting: 
	I1014 20:14:11.116246  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.133733  483855 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:14:11.133784  483855 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:14:11.135788  483855 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:14:11.135872  483855 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:14:11.385558  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.406160  483855 kic.go:430] container "ha-579393" state is running.
	I1014 20:14:11.406595  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:11.427601  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.427957  483855 machine.go:93] provisionDockerMachine start ...
	I1014 20:14:11.428045  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:11.447692  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:11.447993  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:11.448015  483855 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:14:11.448627  483855 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43868->127.0.0.1:32908: read: connection reset by peer
	I1014 20:14:14.598160  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.598192  483855 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:14:14.598246  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.616421  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.616660  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.616677  483855 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:14:14.772817  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.772902  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.791583  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.791976  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.792005  483855 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:14:14.941113  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:14:14.941153  483855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:14:14.941182  483855 ubuntu.go:190] setting up certificates
	I1014 20:14:14.941192  483855 provision.go:84] configureAuth start
	I1014 20:14:14.941248  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:14.959540  483855 provision.go:143] copyHostCerts
	I1014 20:14:14.959581  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959610  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:14:14.959626  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959736  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:14:14.959861  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959885  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:14:14.959890  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959924  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:14:14.959979  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.959996  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:14:14.960003  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.960029  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:14:14.960082  483855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:14:15.029188  483855 provision.go:177] copyRemoteCerts
	I1014 20:14:15.029258  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:14:15.029297  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.048357  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.153017  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:14:15.153075  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:14:15.172076  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:14:15.172147  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 20:14:15.191156  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:14:15.191247  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:14:15.209919  483855 provision.go:87] duration metric: took 268.700795ms to configureAuth
	I1014 20:14:15.209952  483855 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:14:15.210139  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:15.210238  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.228740  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:15.229042  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:15.229063  483855 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:14:15.497697  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:14:15.497742  483855 machine.go:96] duration metric: took 4.069763695s to provisionDockerMachine
	I1014 20:14:15.497775  483855 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:14:15.497793  483855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:14:15.497866  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:14:15.497946  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.517186  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.621698  483855 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:14:15.625589  483855 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:14:15.625615  483855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:14:15.625644  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:14:15.625730  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:14:15.625831  483855 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:14:15.625845  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:14:15.625954  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:14:15.634004  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:15.652848  483855 start.go:296] duration metric: took 155.05253ms for postStartSetup
	I1014 20:14:15.652947  483855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:14:15.652999  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.671676  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.772231  483855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:14:15.777428  483855 fix.go:56] duration metric: took 4.661453251s for fixHost
	I1014 20:14:15.777461  483855 start.go:83] releasing machines lock for "ha-579393", held for 4.661520575s
	I1014 20:14:15.777540  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:15.795367  483855 ssh_runner.go:195] Run: cat /version.json
	I1014 20:14:15.795414  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.795438  483855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:14:15.795537  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.813628  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.814338  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.974436  483855 ssh_runner.go:195] Run: systemctl --version
	I1014 20:14:15.981351  483855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:14:16.017956  483855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:14:16.023150  483855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:14:16.023222  483855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:14:16.031654  483855 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:14:16.031679  483855 start.go:495] detecting cgroup driver to use...
	I1014 20:14:16.031717  483855 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:14:16.031802  483855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:14:16.048436  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:14:16.061476  483855 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:14:16.061544  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:14:16.076780  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:14:16.090012  483855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:14:16.170317  483855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:14:16.252741  483855 docker.go:234] disabling docker service ...
	I1014 20:14:16.252834  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:14:16.268133  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:14:16.281337  483855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:14:16.362683  483855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:14:16.445975  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:14:16.459439  483855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:14:16.474704  483855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:14:16.474792  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.484201  483855 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:14:16.484258  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.493514  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.502774  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.511809  483855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:14:16.520391  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.529310  483855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.538091  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
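(For reference, the net effect of the sed edits above on the CRI-O drop-in can be checked like this; a sketch reconstructed from the commands, not captured from the host:)

    # sanity-check the rewritten drop-in (paths and values taken from the log)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected matches, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",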
	I1014 20:14:16.547967  483855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:14:16.555555  483855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:14:16.562993  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:16.640477  483855 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:14:16.746213  483855 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:14:16.746283  483855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:14:16.750894  483855 start.go:563] Will wait 60s for crictl version
	I1014 20:14:16.750948  483855 ssh_runner.go:195] Run: which crictl
	I1014 20:14:16.754907  483855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:14:16.780375  483855 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:14:16.780469  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.809161  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.841558  483855 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:14:16.843303  483855 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:14:16.860993  483855 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:14:16.865603  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
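(Note the grep-then-cp idiom above: inside a Docker container /etc/hosts is a bind mount, so it cannot be replaced by rename; writing a temp file and cp-ing it back edits the file in place while staying idempotent. The same pattern in isolation, entry taken from the log:)

    # idempotently pin a host entry without renaming the bind-mounted file
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$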
	I1014 20:14:16.876543  483855 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:14:16.876681  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:16.876735  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.910114  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.910136  483855 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:14:16.910188  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.938328  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.938351  483855 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:14:16.938359  483855 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:14:16.938454  483855 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:14:16.938514  483855 ssh_runner.go:195] Run: crio config
	I1014 20:14:16.986141  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:16.986163  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:16.986185  483855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:14:16.986206  483855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:14:16.986342  483855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:14:16.986402  483855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:14:16.994771  483855 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:14:16.994843  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:14:17.002873  483855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:14:17.016475  483855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:14:17.030150  483855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
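(Once the rendered manifest lands in /var/tmp/minikube/kubeadm.yaml.new, it can be checked offline against the v1beta4 schema. A sketch using kubeadm's built-in validator, available in recent kubeadm releases, and assuming a kubeadm binary is staged alongside kubectl/kubelet in the same binaries directory:)

    # validate the rendered kubeadm config before it replaces kubeadm.yaml
    # (binary path is an assumption; only kubectl/kubelet appear in this log)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new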
	I1014 20:14:17.044797  483855 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:14:17.048950  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:14:17.059592  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:17.138128  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:17.162331  483855 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:14:17.162363  483855 certs.go:195] generating shared ca certs ...
	I1014 20:14:17.162382  483855 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.162522  483855 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:14:17.162565  483855 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:14:17.162572  483855 certs.go:257] generating profile certs ...
	I1014 20:14:17.162658  483855 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:14:17.162681  483855 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:14:17.162721  483855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 20:14:17.668666  483855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 ...
	I1014 20:14:17.668699  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1: {Name:mk8a02e133127c09314986455d50a58a5753fa21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.668891  483855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 ...
	I1014 20:14:17.668912  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1: {Name:mk7efe25de40b153ba4b5ad91ea2ae7247892281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.669022  483855 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:14:17.669169  483855 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:14:17.669333  483855 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:14:17.669353  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:14:17.669372  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:14:17.669388  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:14:17.669407  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:14:17.669426  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:14:17.669444  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:14:17.669462  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:14:17.669483  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:14:17.669558  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:14:17.669604  483855 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:14:17.669618  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:14:17.669651  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:14:17.669682  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:14:17.669720  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:14:17.669797  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:17.669838  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.669858  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.669877  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:14:17.670422  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:14:17.694853  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:14:17.714851  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:14:17.735389  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:14:17.753540  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:14:17.771256  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:14:17.788532  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:14:17.806351  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:14:17.824287  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:14:17.842307  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:14:17.860337  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:14:17.877985  483855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:14:17.891323  483855 ssh_runner.go:195] Run: openssl version
	I1014 20:14:17.898069  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:14:17.907081  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911053  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911125  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.945120  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:14:17.953947  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:14:17.962658  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966544  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966600  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:14:18.000895  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:14:18.009773  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:14:18.018643  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022518  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022596  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.057289  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
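(The b5213941.0, 51391683.0, and 3ec20f2e.0 names above are OpenSSL subject-hash links: a file in /etc/ssl/certs named <hash>.0 points at the certificate whose subject hashes to that value, which is how OpenSSL's verifier locates CA certs. Reproduced by hand for the minikubeCA case:)

    # derive the link name the log creates for minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0 per the log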
	I1014 20:14:18.065942  483855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:14:18.070012  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:14:18.104032  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:14:18.138282  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:14:18.173092  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:14:18.207994  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:14:18.243571  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
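(The -checkend 86400 runs above exit non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, which is presumably how the cert-regeneration decision is made here; for example:)

    # exit status signals expiry: 0 = still valid for at least another day
    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd server cert expires within 24h; regenerate"
    fi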
	I1014 20:14:18.292429  483855 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:18.292512  483855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:14:18.292580  483855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:14:18.321112  483855 cri.go:89] found id: ""
	I1014 20:14:18.321184  483855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:14:18.329855  483855 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:14:18.329881  483855 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:14:18.329939  483855 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:14:18.337735  483855 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:14:18.338246  483855 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.338385  483855 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:14:18.338768  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.339441  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.340024  483855 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:14:18.340045  483855 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:14:18.340052  483855 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:14:18.340058  483855 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:14:18.340056  483855 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:14:18.340064  483855 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:14:18.340494  483855 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:14:18.348268  483855 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:14:18.348302  483855 kubeadm.go:601] duration metric: took 18.415642ms to restartPrimaryControlPlane
	I1014 20:14:18.348309  483855 kubeadm.go:402] duration metric: took 55.890314ms to StartCluster
	I1014 20:14:18.348323  483855 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.348383  483855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.348891  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.349121  483855 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:14:18.349176  483855 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:14:18.349275  483855 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:14:18.349292  483855 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:14:18.349334  483855 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:14:18.349340  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:18.349299  483855 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:14:18.349402  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.349612  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.349733  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.351949  483855 out.go:179] * Verifying Kubernetes components...
	I1014 20:14:18.353493  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:18.370229  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.370618  483855 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:14:18.370669  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.371116  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.372806  483855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:14:18.374593  483855 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.374613  483855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:14:18.374661  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.394050  483855 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:18.394079  483855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:14:18.394142  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.400808  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.415928  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.464479  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:18.478392  483855 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
	I1014 20:14:18.511484  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.526970  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:18.569362  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.569451  483855 retry.go:31] will retry after 139.387299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:18.586526  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.586563  483855 retry.go:31] will retry after 373.05987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
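(Each apply failure above is followed by a retry.go line with a randomized, growing backoff. The pattern reduces to a loop like the following sketch; the delays are illustrative, rounded from the retry intervals logged in this run:)

    # retry kubectl apply with increasing, jittered delays (illustrative loop,
    # not minikube's actual retry package)
    for delay in 0.14 0.37 0.44 0.49 0.64; do
        sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
            /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
            -f /etc/kubernetes/addons/storage-provisioner.yaml && break
        sleep "$delay"   # apiserver on :8443 is still coming up
    done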
	I1014 20:14:18.709958  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:18.765681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.765716  483855 retry.go:31] will retry after 437.429458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.960052  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.015690  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.015732  483855 retry.go:31] will retry after 493.852226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.204088  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.257288  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.257326  483855 retry.go:31] will retry after 639.980295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.510433  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.566057  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.566095  483855 retry.go:31] will retry after 314.039838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.880614  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:19.898421  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.942792  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.942837  483855 retry.go:31] will retry after 998.489046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:19.959292  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.959332  483855 retry.go:31] will retry after 630.832334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:20.480037  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
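(The node_ready warnings here come from polling the node's Ready condition against the apiserver; the equivalent check by hand, which would also fail with connection refused until :8443 answers:)

    # poll the Ready condition the waiter is watching
    kubectl --kubeconfig /home/jenkins/minikube-integration/21409-413763/kubeconfig \
        get node ha-579393 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'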
	I1014 20:14:20.591264  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:20.646581  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.646618  483855 retry.go:31] will retry after 1.015213679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.942176  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:20.995773  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.995805  483855 retry.go:31] will retry after 1.667312943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.662122  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:21.716333  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.716374  483855 retry.go:31] will retry after 2.064978127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.663519  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:22.718117  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.718148  483855 retry.go:31] will retry after 1.187936777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:22.979048  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:23.781666  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:23.836207  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.836236  483855 retry.go:31] will retry after 4.211845068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.906464  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:23.962253  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.962286  483855 retry.go:31] will retry after 1.446389172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:24.979293  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:25.408845  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:25.464280  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:25.464314  483855 retry.go:31] will retry after 4.913115671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:26.979811  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:28.048551  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:28.103870  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:28.103926  483855 retry.go:31] will retry after 5.848016942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:29.479152  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:30.377796  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:30.434283  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:30.434318  483855 retry.go:31] will retry after 3.766557474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:31.479517  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:33.953096  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:33.979303  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:34.008898  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.008931  483855 retry.go:31] will retry after 9.465931342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.201242  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:34.257447  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.257479  483855 retry.go:31] will retry after 6.854944728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:35.979488  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:37.979541  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:40.479409  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:41.113186  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:41.169845  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:41.169886  483855 retry.go:31] will retry after 7.326807796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:42.480113  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:43.475406  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:43.532885  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:43.532922  483855 retry.go:31] will retry after 5.727455615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:44.979266  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:47.479387  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:48.497090  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:48.552250  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:48.552292  483855 retry.go:31] will retry after 19.686847261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.260622  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:49.315681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.315717  483855 retry.go:31] will retry after 10.36859919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:49.479476  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:51.479855  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:53.979220  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:55.979988  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:58.479745  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:59.685187  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:59.739249  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:59.739293  483855 retry.go:31] will retry after 17.039426961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:00.979569  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:02.980017  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:05.479408  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:07.479974  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:08.239444  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:08.295601  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:08.295641  483855 retry.go:31] will retry after 35.811237481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:09.979273  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:12.479310  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:14.480040  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:16.779044  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:16.836669  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:16.836718  483855 retry.go:31] will retry after 20.079248911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:16.979513  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:19.479421  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:21.979452  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:24.479358  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:26.979383  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:29.479315  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:31.979207  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:33.979957  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:36.479122  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:36.916703  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:36.972468  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:36.972659  483855 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1014 20:15:38.479432  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:40.479519  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:42.979247  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:44.107661  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:44.164144  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:44.164298  483855 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:15:44.166449  483855 out.go:179] * Enabled addons: 
	I1014 20:15:44.168164  483855 addons.go:514] duration metric: took 1m25.81898405s for enable addons: enabled=[]
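	The enable-addons phase above gave up after 1m25s with enabled=[]; each attempt followed the growing-backoff retry visible in the retry.go:31 lines. A minimal bash sketch of that pattern (illustrative only, not minikube's actual retry implementation; assumes kubectl on PATH and the in-node kubeconfig path from the log):

	    #!/usr/bin/env bash
	    # Re-run an apply with exponentially growing delays, roughly matching the
	    # 1.4s -> 4.9s -> 5.8s -> ... intervals logged above.
	    apply_with_retry() {
	      local manifest=$1 delay=1 deadline=$((SECONDS + 300))
	      until sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f "$manifest"; do
	        (( SECONDS >= deadline )) && { echo "gave up on $manifest" >&2; return 1; }
	        echo "will retry after ${delay}s"
	        sleep "$delay"
	        delay=$((delay * 2))
	      done
	    }
	    apply_with_retry /etc/kubernetes/addons/storageclass.yaml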
	W1014 20:15:45.479183  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same "Ready" poll failure repeats every 2-2.5s until 20:20:15; 113 near-identical lines elided ...]
	W1014 20:20:15.979749  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:18.479349  483855 node_ready.go:38] duration metric: took 6m0.000903914s for node "ha-579393" to be "Ready" ...
	I1014 20:20:18.482302  483855 out.go:203] 
	W1014 20:20:18.483783  483855 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:20:18.483798  483855 out.go:285] * 
	W1014 20:20:18.485408  483855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:20:18.486562  483855 out.go:203] 
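	The GUEST_START failure is the 6-minute node-Ready budget expiring (node_ready.go reports exactly 6m0.000903914s); the poll above never once reached the apiserver. Roughly the same check, expressed with kubectl (illustrative only; with the apiserver down it would be expected to fail immediately with the same connection-refused error rather than waiting out the timeout):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl wait \
	      --for=condition=Ready node/ha-579393 --timeout=6m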
	
	
	==> CRI-O <==
	Oct 14 20:20:20 ha-579393 crio[520]: time="2025-10-14T20:20:20.282745337Z" level=info msg="createCtr: removing container 9472d68ec5a52c5e4515b3cae44ea641e5bba3e9248212fbf94e5ebeb00b2b4b" id=14fe7ca3-9c82-4eb7-a37a-6d6f2d1ceb98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:20 ha-579393 crio[520]: time="2025-10-14T20:20:20.28279696Z" level=info msg="createCtr: deleting container 9472d68ec5a52c5e4515b3cae44ea641e5bba3e9248212fbf94e5ebeb00b2b4b from storage" id=14fe7ca3-9c82-4eb7-a37a-6d6f2d1ceb98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:20 ha-579393 crio[520]: time="2025-10-14T20:20:20.284936971Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=14fe7ca3-9c82-4eb7-a37a-6d6f2d1ceb98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.252746473Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3414ded5-f4b6-4555-9c7b-cab70f7fb57f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.252916082Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1e27712d-22ee-4730-ba69-cc063c9ddbaa name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.253657899Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b0e85593-55f9-4ef7-ab4e-6ced0623be98 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.253714738Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6b93ff5a-d29a-463d-b3d2-32c3c4215c13 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.25465626Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.254879676Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.254947237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.255146025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.260180508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.260801545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.261925004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.262462592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.277220644Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278499909Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278912835Z" level=info msg="createCtr: deleting container ID 105b807233149364a456f28ae495395abfb31ebb94cf98549dfee4f01ca3c745 from idIndex" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278943214Z" level=info msg="createCtr: removing container 105b807233149364a456f28ae495395abfb31ebb94cf98549dfee4f01ca3c745" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278973857Z" level=info msg="createCtr: deleting container 105b807233149364a456f28ae495395abfb31ebb94cf98549dfee4f01ca3c745 from storage" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.27998729Z" level=info msg="createCtr: deleting container ID ff0202f9b7521f3cc60f195a45ea5f738d55bd9adb80d36da400b9782ac33726 from idIndex" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.280023677Z" level=info msg="createCtr: removing container ff0202f9b7521f3cc60f195a45ea5f738d55bd9adb80d36da400b9782ac33726" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.280060446Z" level=info msg="createCtr: deleting container ff0202f9b7521f3cc60f195a45ea5f738d55bd9adb80d36da400b9782ac33726 from storage" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.282618815Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.282987169Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:20:21.405050    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:21.405595    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:21.407210    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:21.407655    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:21.409220    2207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:20:21 up  3:02,  0 user,  load average: 0.05, 0.09, 0.35
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.072815     672 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.253390     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.285305     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:20 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:20 ha-579393 kubelet[672]:  > podSandboxID="5976dd15b049c7652330b69d52defb82171fed7efe8ed4c61c12be5bf2a58f46"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.285424     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:20 ha-579393 kubelet[672]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:20 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.285467     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.252311     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.252474     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283034     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > podSandboxID="9dc8cd32451cd7b221667650ba3c14209dc0945a835fe89701346af7832b9a20"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283151     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283195     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283215     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > podSandboxID="f180c4075f618508cee2088a1ba338b9bc1be40472f118acf4ce26f50c5f9c95"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283282     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.284520     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	

-- /stdout --
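Every control-plane container in the log above dies with the same runtime error, "cannot open sd-bus: No such file or directory": runc's systemd cgroup driver needs to reach systemd over D-Bus, and no system bus is reachable inside the node container. A quick way to check that hypothesis from the host is to look for the bus socket and for CRI-O's cgroup manager setting (a diagnostic sketch, assuming the ha-579393 node container is still running and that CRI-O keeps its config under /etc/crio; these commands are not part of the harness):

	docker exec ha-579393 ls -l /run/dbus/system_bus_socket
	docker exec ha-579393 grep -r cgroup_manager /etc/crio/

If the socket is absent while the config selects cgroup_manager = "systemd", every CreateContainer call fails exactly as logged, which is consistent with the empty "container status" table above.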
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (309.719579ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.90s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-579393" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":nul
l,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list
--output json"
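The assertion reads the profile's Status field out of that JSON blob. When reproducing by hand, the same field can be pulled out with a one-liner (illustrative only; jq is an assumption, not part of the harness):

	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-579393") | .Status'

Here it would print "Starting": the cluster never recovered from the earlier restart, so the profile still reports its start-up state instead of the "Degraded" state the test expects after a secondary node is deleted.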
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:14:11.163807444Z",
	            "FinishedAt": "2025-10-14T20:14:10.012215428Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e28d83e308e6af894f39de197ae2094de94e5854c96689c87348eaa90862b2d",
	            "SandboxKey": "/var/run/docker/netns/8e28d83e308e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32910"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4c:9d:2b:25:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "a2ff963b7135d73b1a96b4db82df9afe9bb179109a039548974da4d367372ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
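Only a few fields in that inspect dump matter for this failure: the container is Running, it was restarted at 20:14:11, and it holds 192.168.49.2 on the ha-579393 network. A Go template narrows the dump to exactly those fields, the same mechanism the harness and minikube use elsewhere in this log (a convenience sketch, not part of the test):

	docker inspect -f '{{.State.Status}} {{.State.StartedAt}} {{(index .NetworkSettings.Networks "ha-579393").IPAddress}}' ha-579393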
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (308.510548ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
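The {{.Host}} probe here and the {{.APIServer}} probe above show the mismatch behind most of the failures in this section: the Docker host is Running while the apiserver inside it is Stopped. Both fields can be read in a single call (an illustrative invocation; the harness queries them one at a time):

	out/minikube-linux-amd64 status -p ha-579393 --format '{{.Host}}/{{.APIServer}}/{{.Kubelet}}'

minikube status also encodes component state in its exit code, which is why an exit status of 2 with Host=Running is flagged as "may be ok" rather than as a hard error.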
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-579393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- rollout status deployment/busybox                      │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                             │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                       │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                          │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                            │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:14:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:14:10.920500  483855 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:14:10.920744  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920765  483855 out.go:374] Setting ErrFile to fd 2...
	I1014 20:14:10.920770  483855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:14:10.920950  483855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:14:10.921400  483855 out.go:368] Setting JSON to false
	I1014 20:14:10.922307  483855 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10597,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:14:10.922423  483855 start.go:141] virtualization: kvm guest
	I1014 20:14:10.924678  483855 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:14:10.925922  483855 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:14:10.925932  483855 notify.go:220] Checking for updates...
	I1014 20:14:10.928150  483855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:14:10.929578  483855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:10.931110  483855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:14:10.932372  483855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:14:10.933593  483855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:14:10.935251  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:10.935376  483855 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:14:10.960161  483855 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:14:10.960301  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.020772  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.009250952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.020894  483855 docker.go:318] overlay module found
	I1014 20:14:11.022935  483855 out.go:179] * Using the docker driver based on existing profile
	I1014 20:14:11.024198  483855 start.go:305] selected driver: docker
	I1014 20:14:11.024214  483855 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:11.024304  483855 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:14:11.024438  483855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:14:11.086893  483855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:14:11.076411866 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:14:11.087678  483855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:14:11.087721  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:11.087800  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:11.087868  483855 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1014 20:14:11.090005  483855 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:14:11.091314  483855 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:14:11.092803  483855 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:14:11.094111  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:11.094148  483855 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:14:11.094156  483855 cache.go:58] Caching tarball of preloaded images
	I1014 20:14:11.094218  483855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:14:11.094241  483855 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:14:11.094277  483855 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:14:11.094382  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.115783  483855 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:14:11.115808  483855 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:14:11.115828  483855 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:14:11.115855  483855 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:14:11.115928  483855 start.go:364] duration metric: took 47.72µs to acquireMachinesLock for "ha-579393"
	I1014 20:14:11.115949  483855 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:14:11.115957  483855 fix.go:54] fixHost starting: 
	I1014 20:14:11.116246  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.133733  483855 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:14:11.133784  483855 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:14:11.135788  483855 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:14:11.135872  483855 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:14:11.385558  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:11.406160  483855 kic.go:430] container "ha-579393" state is running.
	I1014 20:14:11.406595  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:11.427601  483855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:14:11.427957  483855 machine.go:93] provisionDockerMachine start ...
	I1014 20:14:11.428045  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:11.447692  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:11.447993  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:11.448015  483855 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:14:11.448627  483855 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43868->127.0.0.1:32908: read: connection reset by peer
	I1014 20:14:14.598160  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.598192  483855 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:14:14.598246  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.616421  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.616660  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.616677  483855 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:14:14.772817  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:14:14.772902  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:14.791583  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:14.791976  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:14.792005  483855 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:14:14.941113  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:14:14.941153  483855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:14:14.941182  483855 ubuntu.go:190] setting up certificates
	I1014 20:14:14.941192  483855 provision.go:84] configureAuth start
	I1014 20:14:14.941248  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:14.959540  483855 provision.go:143] copyHostCerts
	I1014 20:14:14.959581  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959610  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:14:14.959626  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:14:14.959736  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:14:14.959861  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959885  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:14:14.959890  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:14:14.959924  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:14:14.959979  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.959996  483855 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:14:14.960003  483855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:14:14.960029  483855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:14:14.960082  483855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:14:15.029188  483855 provision.go:177] copyRemoteCerts
	I1014 20:14:15.029258  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:14:15.029297  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.048357  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.153017  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:14:15.153075  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:14:15.172076  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:14:15.172147  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 20:14:15.191156  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:14:15.191247  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:14:15.209919  483855 provision.go:87] duration metric: took 268.700795ms to configureAuth
	I1014 20:14:15.209952  483855 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:14:15.210139  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:15.210238  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.228740  483855 main.go:141] libmachine: Using SSH client type: native
	I1014 20:14:15.229042  483855 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I1014 20:14:15.229063  483855 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:14:15.497697  483855 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:14:15.497742  483855 machine.go:96] duration metric: took 4.069763695s to provisionDockerMachine
	I1014 20:14:15.497775  483855 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:14:15.497793  483855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:14:15.497866  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:14:15.497946  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.517186  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.621698  483855 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:14:15.625589  483855 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:14:15.625615  483855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:14:15.625644  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:14:15.625730  483855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:14:15.625831  483855 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:14:15.625845  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:14:15.625954  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:14:15.634004  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:15.652848  483855 start.go:296] duration metric: took 155.05253ms for postStartSetup
	I1014 20:14:15.652947  483855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:14:15.652999  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.671676  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.772231  483855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:14:15.777428  483855 fix.go:56] duration metric: took 4.661453251s for fixHost
	I1014 20:14:15.777461  483855 start.go:83] releasing machines lock for "ha-579393", held for 4.661520575s
	I1014 20:14:15.777540  483855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:14:15.795367  483855 ssh_runner.go:195] Run: cat /version.json
	I1014 20:14:15.795414  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.795438  483855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:14:15.795537  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:15.813628  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.814338  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:15.974436  483855 ssh_runner.go:195] Run: systemctl --version
	I1014 20:14:15.981351  483855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:14:16.017956  483855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:14:16.023150  483855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:14:16.023222  483855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:14:16.031654  483855 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:14:16.031679  483855 start.go:495] detecting cgroup driver to use...
	I1014 20:14:16.031717  483855 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:14:16.031802  483855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:14:16.048436  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:14:16.061476  483855 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:14:16.061544  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:14:16.076780  483855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:14:16.090012  483855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:14:16.170317  483855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:14:16.252741  483855 docker.go:234] disabling docker service ...
	I1014 20:14:16.252834  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:14:16.268133  483855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:14:16.281337  483855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:14:16.362683  483855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:14:16.445975  483855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:14:16.459439  483855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:14:16.474704  483855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:14:16.474792  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.484201  483855 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:14:16.484258  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.493514  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.502774  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.511809  483855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:14:16.520391  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.529310  483855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.538091  483855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:14:16.547967  483855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:14:16.555555  483855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:14:16.562993  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:16.640477  483855 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:14:16.746213  483855 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:14:16.746283  483855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:14:16.750894  483855 start.go:563] Will wait 60s for crictl version
	I1014 20:14:16.750948  483855 ssh_runner.go:195] Run: which crictl
	I1014 20:14:16.754907  483855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:14:16.780375  483855 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:14:16.780469  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.809161  483855 ssh_runner.go:195] Run: crio --version
	I1014 20:14:16.841558  483855 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:14:16.843303  483855 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:14:16.860993  483855 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:14:16.865603  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:14:16.876543  483855 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:14:16.876681  483855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:14:16.876735  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.910114  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.910136  483855 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:14:16.910188  483855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:14:16.938328  483855 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:14:16.938351  483855 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:14:16.938359  483855 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:14:16.938454  483855 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:14:16.938514  483855 ssh_runner.go:195] Run: crio config
	I1014 20:14:16.986141  483855 cni.go:84] Creating CNI manager for ""
	I1014 20:14:16.986163  483855 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:14:16.986185  483855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:14:16.986206  483855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:14:16.986342  483855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:14:16.986402  483855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:14:16.994771  483855 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:14:16.994843  483855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:14:17.002873  483855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:14:17.016475  483855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:14:17.030150  483855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:14:17.044797  483855 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:14:17.048950  483855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:14:17.059592  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:17.138128  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:17.162331  483855 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:14:17.162363  483855 certs.go:195] generating shared ca certs ...
	I1014 20:14:17.162382  483855 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.162522  483855 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:14:17.162565  483855 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:14:17.162572  483855 certs.go:257] generating profile certs ...
	I1014 20:14:17.162658  483855 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:14:17.162681  483855 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:14:17.162721  483855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 20:14:17.668666  483855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 ...
	I1014 20:14:17.668699  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1: {Name:mk8a02e133127c09314986455d50a58a5753fa21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.668891  483855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 ...
	I1014 20:14:17.668912  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1: {Name:mk7efe25de40b153ba4b5ad91ea2ae7247892281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:17.669022  483855 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt
	I1014 20:14:17.669169  483855 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1 -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key
	I1014 20:14:17.669333  483855 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:14:17.669353  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:14:17.669372  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:14:17.669388  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:14:17.669407  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:14:17.669426  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:14:17.669444  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:14:17.669462  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:14:17.669483  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:14:17.669558  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:14:17.669604  483855 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:14:17.669618  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:14:17.669651  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:14:17.669682  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:14:17.669720  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:14:17.669797  483855 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:14:17.669838  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.669858  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.669877  483855 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:14:17.670422  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:14:17.694853  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:14:17.714851  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:14:17.735389  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:14:17.753540  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:14:17.771256  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:14:17.788532  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:14:17.806351  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:14:17.824287  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:14:17.842307  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:14:17.860337  483855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:14:17.877985  483855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:14:17.891323  483855 ssh_runner.go:195] Run: openssl version
	I1014 20:14:17.898069  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:14:17.907081  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911053  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.911125  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:14:17.945120  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:14:17.953947  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:14:17.962658  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966544  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:14:17.966600  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:14:18.000895  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:14:18.009773  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:14:18.018643  483855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022518  483855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.022596  483855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:14:18.057289  483855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:14:18.065942  483855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:14:18.070012  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:14:18.104032  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:14:18.138282  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:14:18.173092  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:14:18.207994  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:14:18.243571  483855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:14:18.292429  483855 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:14:18.292512  483855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:14:18.292580  483855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:14:18.321112  483855 cri.go:89] found id: ""
	I1014 20:14:18.321184  483855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:14:18.329855  483855 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:14:18.329881  483855 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:14:18.329939  483855 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:14:18.337735  483855 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:14:18.338246  483855 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.338385  483855 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:14:18.338768  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.339441  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.340024  483855 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:14:18.340045  483855 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:14:18.340052  483855 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:14:18.340058  483855 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:14:18.340056  483855 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:14:18.340064  483855 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:14:18.340494  483855 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:14:18.348268  483855 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:14:18.348302  483855 kubeadm.go:601] duration metric: took 18.415642ms to restartPrimaryControlPlane
	I1014 20:14:18.348309  483855 kubeadm.go:402] duration metric: took 55.890314ms to StartCluster
	I1014 20:14:18.348323  483855 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.348383  483855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:14:18.348891  483855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:14:18.349121  483855 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:14:18.349176  483855 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:14:18.349275  483855 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:14:18.349292  483855 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:14:18.349334  483855 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:14:18.349340  483855 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:14:18.349299  483855 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:14:18.349402  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.349612  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.349733  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.351949  483855 out.go:179] * Verifying Kubernetes components...
	I1014 20:14:18.353493  483855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:14:18.370229  483855 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:14:18.370618  483855 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:14:18.370669  483855 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:14:18.371116  483855 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:14:18.372806  483855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:14:18.374593  483855 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.374613  483855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:14:18.374661  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.394050  483855 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:18.394079  483855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:14:18.394142  483855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:14:18.400808  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.415928  483855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:14:18.464479  483855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:14:18.478392  483855 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
	I1014 20:14:18.511484  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:14:18.526970  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:18.569362  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.569451  483855 retry.go:31] will retry after 139.387299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:18.586526  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.586563  483855 retry.go:31] will retry after 373.05987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.709958  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:18.765681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.765716  483855 retry.go:31] will retry after 437.429458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:18.960052  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.015690  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.015732  483855 retry.go:31] will retry after 493.852226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.204088  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.257288  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.257326  483855 retry.go:31] will retry after 639.980295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.510433  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:19.566057  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.566095  483855 retry.go:31] will retry after 314.039838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.880614  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:14:19.898421  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:19.942792  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.942837  483855 retry.go:31] will retry after 998.489046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:19.959292  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:19.959332  483855 retry.go:31] will retry after 630.832334ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:20.480037  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:20.591264  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:20.646581  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.646618  483855 retry.go:31] will retry after 1.015213679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.942176  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:20.995773  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:20.995805  483855 retry.go:31] will retry after 1.667312943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.662122  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:21.716333  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:21.716374  483855 retry.go:31] will retry after 2.064978127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.663519  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:22.718117  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:22.718148  483855 retry.go:31] will retry after 1.187936777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:22.979048  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:23.781666  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:23.836207  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.836236  483855 retry.go:31] will retry after 4.211845068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.906464  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:23.962253  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:23.962286  483855 retry.go:31] will retry after 1.446389172s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:24.979293  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:25.408845  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:25.464280  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:25.464314  483855 retry.go:31] will retry after 4.913115671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:26.979811  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:28.048551  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:28.103870  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:28.103926  483855 retry.go:31] will retry after 5.848016942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:29.479152  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:30.377796  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:30.434283  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:30.434318  483855 retry.go:31] will retry after 3.766557474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:31.479517  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:33.953096  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:33.979303  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:34.008898  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.008931  483855 retry.go:31] will retry after 9.465931342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.201242  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:34.257447  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:34.257479  483855 retry.go:31] will retry after 6.854944728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:35.979488  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:37.979541  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:40.479409  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:41.113186  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:41.169845  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:41.169886  483855 retry.go:31] will retry after 7.326807796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:42.480113  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:43.475406  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:43.532885  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:43.532922  483855 retry.go:31] will retry after 5.727455615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:44.979266  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:47.479387  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:48.497090  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:14:48.552250  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:48.552292  483855 retry.go:31] will retry after 19.686847261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.260622  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:49.315681  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:49.315717  483855 retry.go:31] will retry after 10.36859919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:14:49.479476  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:51.479855  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:53.979220  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:55.979988  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:14:58.479745  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:14:59.685187  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:14:59.739249  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:14:59.739293  483855 retry.go:31] will retry after 17.039426961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:00.979569  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:02.980017  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:05.479408  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:07.479974  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:08.239444  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:08.295601  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:08.295641  483855 retry.go:31] will retry after 35.811237481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:09.979273  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:12.479310  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:14.480040  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:16.779044  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:16.836669  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:15:16.836718  483855 retry.go:31] will retry after 20.079248911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:16.979513  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:19.479421  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:21.979452  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:24.479358  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:26.979383  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:29.479315  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:31.979207  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:33.979957  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:36.479122  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:36.916703  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:15:36.972468  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:36.972659  483855 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
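[Editor's note] The `--validate=false` hint in kubectl's message would not have rescued these applies: validation fails only because downloading the OpenAPI schema requires the apiserver, and the same "connection refused" on localhost:8443 would also stop the apply itself. A minimal, self-contained Go probe (hypothetical, not part of minikube) makes the failing precondition concrete:

// Sketch of a reachability probe for the precondition every kubectl call
// in this log depends on: a listening apiserver on localhost:8443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// On the failing node this prints the same
		// "connect: connection refused" seen throughout the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open; kubectl can at least dial it")
}

Until that dial succeeds, every addon apply and every node status poll below is doomed to the same error.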
	W1014 20:15:38.479432  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:40.479519  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:15:42.979247  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:15:44.107661  483855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:15:44.164144  483855 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:15:44.164298  483855 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:15:44.166449  483855 out.go:179] * Enabled addons: 
	I1014 20:15:44.168164  483855 addons.go:514] duration metric: took 1m25.81898405s for enable addons: enabled=[]
	W1014 20:15:45.479183  483855 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 warning repeats about every 2-2.5s, 115 times in all, from 20:15:45 through 20:20:15 ...]
	I1014 20:20:18.479349  483855 node_ready.go:38] duration metric: took 6m0.000903914s for node "ha-579393" to be "Ready" ...
	I1014 20:20:18.482302  483855 out.go:203] 
	W1014 20:20:18.483783  483855 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:20:18.483798  483855 out.go:285] * 
	W1014 20:20:18.485408  483855 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:20:18.486562  483855 out.go:203] 
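The failure above is mechanical: the client retried the node's "Ready" condition every 2-2.5s against https://192.168.49.2:8443 until the 6m0s wait budget expired, then start exited with GUEST_START. The same condition can be probed by hand; a sketch of an equivalent manual check (the jsonpath expression is illustrative, not taken from the harness, which uses client-go in-process):

	kubectl --kubeconfig=/home/jenkins/minikube-integration/21409-413763/kubeconfig \
	  get node ha-579393 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# while the apiserver is down this fails exactly like the retries above:
	# dial tcp 192.168.49.2:8443: connect: connection refused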
	
	
	==> CRI-O <==
	Oct 14 20:20:20 ha-579393 crio[520]: time="2025-10-14T20:20:20.282745337Z" level=info msg="createCtr: removing container 9472d68ec5a52c5e4515b3cae44ea641e5bba3e9248212fbf94e5ebeb00b2b4b" id=14fe7ca3-9c82-4eb7-a37a-6d6f2d1ceb98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:20 ha-579393 crio[520]: time="2025-10-14T20:20:20.28279696Z" level=info msg="createCtr: deleting container 9472d68ec5a52c5e4515b3cae44ea641e5bba3e9248212fbf94e5ebeb00b2b4b from storage" id=14fe7ca3-9c82-4eb7-a37a-6d6f2d1ceb98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:20 ha-579393 crio[520]: time="2025-10-14T20:20:20.284936971Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=14fe7ca3-9c82-4eb7-a37a-6d6f2d1ceb98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.252746473Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3414ded5-f4b6-4555-9c7b-cab70f7fb57f name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.252916082Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1e27712d-22ee-4730-ba69-cc063c9ddbaa name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.253657899Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b0e85593-55f9-4ef7-ab4e-6ced0623be98 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.253714738Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6b93ff5a-d29a-463d-b3d2-32c3c4215c13 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.25465626Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.254879676Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.254947237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.255146025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.260180508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.260801545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.261925004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.262462592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.277220644Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278499909Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278912835Z" level=info msg="createCtr: deleting container ID 105b807233149364a456f28ae495395abfb31ebb94cf98549dfee4f01ca3c745 from idIndex" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278943214Z" level=info msg="createCtr: removing container 105b807233149364a456f28ae495395abfb31ebb94cf98549dfee4f01ca3c745" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.278973857Z" level=info msg="createCtr: deleting container 105b807233149364a456f28ae495395abfb31ebb94cf98549dfee4f01ca3c745 from storage" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.27998729Z" level=info msg="createCtr: deleting container ID ff0202f9b7521f3cc60f195a45ea5f738d55bd9adb80d36da400b9782ac33726 from idIndex" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.280023677Z" level=info msg="createCtr: removing container ff0202f9b7521f3cc60f195a45ea5f738d55bd9adb80d36da400b9782ac33726" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.280060446Z" level=info msg="createCtr: deleting container ff0202f9b7521f3cc60f195a45ea5f738d55bd9adb80d36da400b9782ac33726 from storage" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.282618815Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=9ae0f06e-9cc3-4d42-a3aa-0f3464b29678 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:20:21 ha-579393 crio[520]: time="2025-10-14T20:20:21.282987169Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=8791f457-b5db-4a96-91e7-cee927426181 name=/runtime.v1.RuntimeService/CreateContainer
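Every CreateContainer attempt for etcd, kube-apiserver, and kube-controller-manager above dies with `cannot open sd-bus: No such file or directory`, which is what the OCI runtime typically reports when it is configured for the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node. Two quick checks narrow this down; a sketch, assuming the kicbase layout (the paths are assumptions, not taken from this run):

	# is CRI-O configured for the systemd cgroup manager? (assumed config location)
	docker exec ha-579393 grep -R cgroup_manager /etc/crio/
	# are the sockets a systemd cgroup manager would dial actually present? (candidate paths)
	docker exec ha-579393 ls -l /run/systemd/private /run/dbus/system_bus_socket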
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:20:23.056662    2383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:23.057167    2383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:23.058812    2383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:23.059207    2383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:20:23.060802    2383 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
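Both clients fail the same way: the log collector's kubectl on the node targets localhost:8443, minikube's own client targets 192.168.49.2:8443, and both get connection refused because the kube-apiserver container never started (see the CRI-O errors above). The endpoint can be probed directly; a sketch, assuming curl is available in the image:

	docker exec ha-579393 curl -ks https://localhost:8443/healthz   # refused while the apiserver is down
	curl -ks https://192.168.49.2:8443/healthz                      # same endpoint via the container IP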
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:20:23 up  3:02,  0 user,  load average: 0.13, 0.10, 0.35
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.072815     672 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.253390     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.285305     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:20 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:20 ha-579393 kubelet[672]:  > podSandboxID="5976dd15b049c7652330b69d52defb82171fed7efe8ed4c61c12be5bf2a58f46"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.285424     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:20 ha-579393 kubelet[672]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:20 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:20 ha-579393 kubelet[672]: E1014 20:20:20.285467     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.252311     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.252474     672 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283034     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > podSandboxID="9dc8cd32451cd7b221667650ba3c14209dc0945a835fe89701346af7832b9a20"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283151     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283195     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283215     672 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > podSandboxID="f180c4075f618508cee2088a1ba338b9bc1be40472f118acf4ce26f50c5f9c95"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.283282     672 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:20:21 ha-579393 kubelet[672]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:20:21 ha-579393 kubelet[672]:  > logger="UnhandledError"
	Oct 14 20:20:21 ha-579393 kubelet[672]: E1014 20:20:21.284520     672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (307.03933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-579393 stop --alsologtostderr -v 5: (1.220011345s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5: exit status 7 (69.528966ms)

                                                
                                                
-- stdout --
	ha-579393
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:20:24.736138  489842 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:24.736385  489842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.736393  489842 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:24.736396  489842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.736599  489842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:24.736839  489842 out.go:368] Setting JSON to false
	I1014 20:20:24.736876  489842 mustload.go:65] Loading cluster: ha-579393
	I1014 20:20:24.737103  489842 notify.go:220] Checking for updates...
	I1014 20:20:24.737256  489842 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:24.737272  489842 status.go:174] checking status of ha-579393 ...
	I1014 20:20:24.737756  489842 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:24.756235  489842 status.go:371] ha-579393 host status = "Stopped" (err=<nil>)
	I1014 20:20:24.756309  489842 status.go:384] host is not running, skipping remaining checks
	I1014 20:20:24.756317  489842 status.go:176] ha-579393 status: &{Name:ha-579393 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5": ha-579393
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5": ha-579393
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-579393 status --alsologtostderr -v 5": ha-579393
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
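All three assertions scan the plain-text status output for per-node blocks and fail because only the primary profile entry exists; the secondary nodes were never added by the earlier failed subtests. For counting nodes in scripts, the machine-readable output avoids parsing text; a sketch:

	out/minikube-linux-amd64 -p ha-579393 status --output json
	# one JSON object per node; a lone "ha-579393" entry here would confirm the missing secondaries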

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:14:11.163807444Z",
	            "FinishedAt": "2025-10-14T20:20:23.81110806Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]
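The harness extracts single fields from this document with Go templates (as in the `docker container inspect ha-579393 --format={{.State.Status}}` calls elsewhere in this log) rather than parsing the full JSON; the same technique works ad hoc against the fields visible above:

	docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}' ha-579393
	# -> exited exit=130 finished=2025-10-14T20:20:23.81110806Z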

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 7 (70.840324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-579393" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (368.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1014 20:20:35.878737  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:12.799592  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.07933786s)

                                                
                                                
-- stdout --
	* [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:20:24.896990  489898 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:24.897284  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897296  489898 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:24.897302  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897542  489898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:24.898101  489898 out.go:368] Setting JSON to false
	I1014 20:20:24.899124  489898 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10971,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:20:24.899246  489898 start.go:141] virtualization: kvm guest
	I1014 20:20:24.901656  489898 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:20:24.903433  489898 notify.go:220] Checking for updates...
	I1014 20:20:24.903468  489898 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:20:24.905313  489898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:20:24.907144  489898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:24.908684  489898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:20:24.910248  489898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:20:24.911693  489898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:20:24.913427  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:24.913995  489898 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:20:24.938824  489898 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:20:24.939019  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.007228  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:24.996574045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.007350  489898 docker.go:318] overlay module found
	I1014 20:20:25.010286  489898 out.go:179] * Using the docker driver based on existing profile
	I1014 20:20:25.011792  489898 start.go:305] selected driver: docker
	I1014 20:20:25.011819  489898 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.011930  489898 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:20:25.012031  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.072380  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:25.062155128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.073231  489898 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:20:25.073267  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:25.073308  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:25.073364  489898 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.075991  489898 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:20:25.077302  489898 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:20:25.079216  489898 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:20:25.080637  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:25.080691  489898 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:20:25.080700  489898 cache.go:58] Caching tarball of preloaded images
	I1014 20:20:25.080769  489898 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:20:25.080800  489898 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:20:25.080809  489898 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:20:25.080900  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.101692  489898 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:20:25.101730  489898 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:20:25.101767  489898 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:20:25.101806  489898 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:20:25.101915  489898 start.go:364] duration metric: took 60.146µs to acquireMachinesLock for "ha-579393"
	I1014 20:20:25.101941  489898 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:20:25.101949  489898 fix.go:54] fixHost starting: 
	I1014 20:20:25.102193  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.120107  489898 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:20:25.120156  489898 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:20:25.122045  489898 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:20:25.122122  489898 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:20:25.362928  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.381546  489898 kic.go:430] container "ha-579393" state is running.
	I1014 20:20:25.382015  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:25.401922  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.402188  489898 machine.go:93] provisionDockerMachine start ...
	I1014 20:20:25.402264  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:25.421465  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:25.421725  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:25.421746  489898 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:20:25.422396  489898 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:32913: read: connection reset by peer
	I1014 20:20:28.574789  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.574824  489898 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:20:28.574892  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.593311  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.593527  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.593539  489898 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:20:28.751227  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.751331  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.771390  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.771598  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.771614  489898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:20:28.919191  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:20:28.919232  489898 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:20:28.919288  489898 ubuntu.go:190] setting up certificates
	I1014 20:20:28.919304  489898 provision.go:84] configureAuth start
	I1014 20:20:28.919374  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:28.937555  489898 provision.go:143] copyHostCerts
	I1014 20:20:28.937600  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937642  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:20:28.937656  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937748  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:20:28.938006  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938042  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:20:28.938054  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938107  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:20:28.938179  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938205  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:20:28.938214  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938254  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:20:28.938327  489898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
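	The server cert above is signed by the minikube CA and carries the SAN list shown in the log (127.0.0.1, 192.168.49.2, ha-579393, localhost, minikube). As a rough sketch of what such a certificate looks like in Go's crypto/x509 (self-signed here for brevity; the names and the 26280h lifetime mirror the log, everything else is illustrative):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-579393"}}, // org from the log line
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list exactly as logged: san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
			DNSNames:    []string{"ha-579393", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}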
	I1014 20:20:28.984141  489898 provision.go:177] copyRemoteCerts
	I1014 20:20:28.984206  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:20:28.984251  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.002844  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.106575  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:20:29.106640  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:20:29.125278  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:20:29.125393  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:20:29.144458  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:20:29.144532  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:20:29.163285  489898 provision.go:87] duration metric: took 243.963585ms to configureAuth
	I1014 20:20:29.163319  489898 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:20:29.163543  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:29.163679  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.182115  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:29.182329  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:29.182344  489898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:20:29.446950  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:20:29.446980  489898 machine.go:96] duration metric: took 4.044773675s to provisionDockerMachine
	I1014 20:20:29.446995  489898 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:20:29.447007  489898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:20:29.447058  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:20:29.447097  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.465397  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.570738  489898 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:20:29.574668  489898 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:20:29.574700  489898 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:20:29.574712  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:20:29.574793  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:20:29.574907  489898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:20:29.574923  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:20:29.575031  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:20:29.583269  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:29.600614  489898 start.go:296] duration metric: took 153.60445ms for postStartSetup
	I1014 20:20:29.600725  489898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:20:29.600803  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.618422  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.719349  489898 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:20:29.724065  489898 fix.go:56] duration metric: took 4.622108754s for fixHost
	I1014 20:20:29.724091  489898 start.go:83] releasing machines lock for "ha-579393", held for 4.622163128s
	I1014 20:20:29.724158  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:29.742263  489898 ssh_runner.go:195] Run: cat /version.json
	I1014 20:20:29.742292  489898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:20:29.742312  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.742360  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.760806  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.761892  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.861522  489898 ssh_runner.go:195] Run: systemctl --version
	I1014 20:20:29.920675  489898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:20:29.958043  489898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:20:29.963013  489898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:20:29.963081  489898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:20:29.971684  489898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
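	Disabling conflicting CNI configs is a rename, not a delete: any bridge/podman config in /etc/cni/net.d gains a .mk_disabled suffix so it can be restored later. A simplified Go sketch of that find/mv step (hypothetical, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeConfigs renames bridge/podman CNI configs so the runtime
	// ignores them, skipping anything already carrying the .mk_disabled suffix.
	func disableBridgeConfigs(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				fmt.Printf("%s, ", src) // same "%p, " trail as the find -printf above
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeConfigs("/etc/cni/net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}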
	I1014 20:20:29.971715  489898 start.go:495] detecting cgroup driver to use...
	I1014 20:20:29.971777  489898 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:20:29.971827  489898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:20:29.986651  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:20:29.999493  489898 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:20:29.999555  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:20:30.014987  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:20:30.028206  489898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:20:30.108561  489898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:20:30.189013  489898 docker.go:234] disabling docker service ...
	I1014 20:20:30.189092  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:20:30.205263  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:20:30.218011  489898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:20:30.297456  489898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:20:30.378372  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:20:30.391541  489898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:20:30.406068  489898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:20:30.406139  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.415378  489898 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:20:30.415458  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.425041  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.434283  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.443270  489898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:20:30.451367  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.460460  489898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.469171  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.478459  489898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:20:30.486229  489898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:20:30.493996  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:30.573307  489898 ssh_runner.go:195] Run: sudo systemctl restart crio
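	The sed pipeline above pins the pause image and forces the systemd cgroup manager by rewriting `key = value` lines in the CRI-O drop-in before daemon-reload and restart. A minimal Go sketch of that line-based rewrite (hypothetical helper; real CRI-O config is TOML, and the conmon_cgroup/sysctl edits follow the same pattern):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfKey forces a `key = "value"` assignment in a config file,
	// replacing any existing line that assigns the key (like the sed calls above).
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %q", key, value)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(line)), 0644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		if err := setConfKey(conf, "cgroup_manager", "systemd"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}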
	I1014 20:20:30.683147  489898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:20:30.683209  489898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:20:30.687341  489898 start.go:563] Will wait 60s for crictl version
	I1014 20:20:30.687394  489898 ssh_runner.go:195] Run: which crictl
	I1014 20:20:30.690908  489898 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:20:30.716598  489898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:20:30.716668  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.746004  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.777705  489898 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:20:30.778957  489898 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:20:30.796976  489898 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:20:30.801378  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.812065  489898 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:20:30.812194  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:30.812256  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.843803  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.843825  489898 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:20:30.843871  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.870297  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.870318  489898 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:20:30.870326  489898 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:20:30.870413  489898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:20:30.870472  489898 ssh_runner.go:195] Run: crio config
	I1014 20:20:30.916212  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:30.916239  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:30.916269  489898 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:20:30.916293  489898 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:20:30.916410  489898 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
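	The rendered kubeadm.yaml above is a four-document stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch of a sanity check that splits the stream on document separators and reports each kind (hypothetical, not part of minikube):

	package main

	import (
		"fmt"
		"strings"
	)

	// kinds splits a multi-document YAML stream on "---" separators and
	// collects the `kind:` of each document.
	func kinds(stream string) []string {
		var out []string
		for _, doc := range strings.Split(stream, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				t := strings.TrimSpace(line)
				if strings.HasPrefix(t, "kind:") {
					out = append(out, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
				}
			}
		}
		return out
	}

	func main() {
		data := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
		fmt.Println(kinds(data)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	}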
	I1014 20:20:30.916472  489898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:20:30.925261  489898 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:20:30.925338  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:20:30.933417  489898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:20:30.946346  489898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:20:30.959345  489898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:20:30.972547  489898 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:20:30.976536  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.987410  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.067104  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.089513  489898 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:20:31.089537  489898 certs.go:195] generating shared ca certs ...
	I1014 20:20:31.089557  489898 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.089728  489898 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:20:31.089804  489898 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:20:31.089820  489898 certs.go:257] generating profile certs ...
	I1014 20:20:31.089945  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:20:31.090021  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:20:31.090072  489898 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:20:31.090088  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:20:31.090106  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:20:31.090118  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:20:31.090131  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:20:31.090142  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:20:31.090156  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:20:31.090168  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:20:31.090182  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:20:31.090241  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:20:31.090277  489898 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:20:31.090288  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:20:31.090313  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:20:31.090343  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:20:31.090372  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:20:31.090421  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:31.090453  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.090470  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.090487  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.091297  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:20:31.111369  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:20:31.131215  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:20:31.152691  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:20:31.177685  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:20:31.197344  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:20:31.216500  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:20:31.234564  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:20:31.252166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:20:31.269606  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:20:31.288166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:20:31.305894  489898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:20:31.318425  489898 ssh_runner.go:195] Run: openssl version
	I1014 20:20:31.324791  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:20:31.333410  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337628  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337704  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.372321  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:20:31.381116  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:20:31.390138  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394052  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394109  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.429938  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:20:31.438655  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:20:31.447298  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451279  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451343  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.485062  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:20:31.493976  489898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:20:31.498163  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:20:31.532437  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:20:31.569216  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:20:31.605892  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:20:31.653534  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:20:31.690955  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
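	Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds); a failing check is what would trigger regeneration. The same check as a Go sketch (the path is one of the certs probed above; the helper name is hypothetical):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkEnd reports an error if the PEM certificate at path expires
	// within the given window, like `openssl x509 -checkend`.
	func checkEnd(path string, window time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(window).After(cert.NotAfter) {
			return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, window, cert.NotAfter)
		}
		return nil
	}

	func main() {
		if err := checkEnd("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}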
	I1014 20:20:31.725979  489898 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:31.726143  489898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:20:31.726202  489898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:20:31.755641  489898 cri.go:89] found id: ""
	I1014 20:20:31.755728  489898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:20:31.764571  489898 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:20:31.764596  489898 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:20:31.764641  489898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:20:31.772544  489898 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:31.772997  489898 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.773109  489898 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:20:31.773353  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.773843  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.774269  489898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:20:31.774283  489898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:20:31.774287  489898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:20:31.774291  489898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:20:31.774297  489898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:20:31.774312  489898 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:20:31.774673  489898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:20:31.783543  489898 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:20:31.783582  489898 kubeadm.go:601] duration metric: took 18.979903ms to restartPrimaryControlPlane
	I1014 20:20:31.783595  489898 kubeadm.go:402] duration metric: took 57.628352ms to StartCluster
	I1014 20:20:31.783616  489898 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.783711  489898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.784245  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.784483  489898 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:20:31.784537  489898 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:20:31.784634  489898 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:20:31.784650  489898 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:20:31.784678  489898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:20:31.784687  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:31.784656  489898 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:20:31.784839  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.784988  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.785316  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.789929  489898 out.go:179] * Verifying Kubernetes components...
	I1014 20:20:31.791591  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.805965  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.806386  489898 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:20:31.806441  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.806931  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.807584  489898 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:20:31.809119  489898 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.809148  489898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:20:31.809214  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.832877  489898 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:20:31.832915  489898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:20:31.832999  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.836985  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.854396  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.900722  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.915259  489898 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
	I1014 20:20:31.948248  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.965203  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.010301  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.010345  489898 retry.go:31] will retry after 180.735659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.026606  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.026655  489898 retry.go:31] will retry after 185.14299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.191908  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:32.212727  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.261347  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.261379  489898 retry.go:31] will retry after 400.487372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.273847  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.273879  489898 retry.go:31] will retry after 332.539123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.606897  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.660842  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.660884  489898 retry.go:31] will retry after 506.115799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.662966  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:32.717555  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.717593  489898 retry.go:31] will retry after 698.279488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.167777  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:33.223185  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.223218  489898 retry.go:31] will retry after 929.627856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.416016  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:33.471972  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.472005  489898 retry.go:31] will retry after 760.905339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:33.916053  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:34.153507  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:34.208070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.208106  489898 retry.go:31] will retry after 1.612829525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.233328  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:34.287658  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.287702  489898 retry.go:31] will retry after 818.99186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.107035  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:35.161369  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.161406  489898 retry.go:31] will retry after 2.372177473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.821805  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:35.876422  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.876462  489898 retry.go:31] will retry after 1.76203735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:36.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:37.533877  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:37.589802  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.589836  489898 retry.go:31] will retry after 2.151742617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.639147  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:37.694173  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.694209  489898 retry.go:31] will retry after 2.414165218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:38.916349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:39.741973  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:39.798810  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:39.798851  489898 retry.go:31] will retry after 6.380239181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.109367  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:40.165446  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.165488  489898 retry.go:31] will retry after 4.273629229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:40.916572  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:43.416160  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:44.439617  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:44.495805  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:44.495847  489898 retry.go:31] will retry after 5.884728712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:45.916420  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:46.179913  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:46.236772  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:46.236810  489898 retry.go:31] will retry after 6.359293031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:48.416258  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:50.381581  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:50.416856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:50.439004  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:50.439036  489898 retry.go:31] will retry after 11.771270745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.597189  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:52.652445  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.652476  489898 retry.go:31] will retry after 10.720617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:52.916399  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:54.916966  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:57.416509  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:59.416864  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:01.916327  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:02.210789  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:02.266987  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:02.267021  489898 retry.go:31] will retry after 17.660934523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.373440  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:03.428855  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.428886  489898 retry.go:31] will retry after 19.842704585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:04.416008  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same node_ready.go:55 warning repeated roughly every 2–2.5s, 5 more occurrences from 20:21:06 through 20:21:15 ...]
	W1014 20:21:18.416349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:19.929156  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:19.984438  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:19.984472  489898 retry.go:31] will retry after 17.500549438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:20.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:22.916397  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:23.271863  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:23.329260  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:23.329293  489898 retry.go:31] will retry after 15.097428161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:24.916706  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:27.416721  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:29.916916  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:32.416582  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:34.916674  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:37.416493  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:37.485708  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:37.543070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:37.543103  489898 retry.go:31] will retry after 40.949070497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.427486  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:38.483097  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.483131  489898 retry.go:31] will retry after 43.966081483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:39.916663  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same node_ready.go:55 warning repeated roughly every 2–2.5s, 15 more occurrences from 20:21:42 through 20:22:15 ...]
	W1014 20:22:17.915856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:18.493252  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:22:18.550465  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:18.550634  489898 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
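The recurring stderr above points at kubectl's client-side validation step: before applying a manifest, kubectl downloads the apiserver's OpenAPI schema, so with nothing listening on localhost:8443 even a correct manifest fails validation (hence the --validate=false hint). A minimal reachability probe for that endpoint, as a sketch only: the URL and 32s timeout are copied from the log's stderr, while the probe program itself and the skipped TLS verification are assumptions, not part of minikube or kubectl.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint and timeout kubectl reports in the stderr above;
	// certificate verification is skipped because this is only a
	// reachability probe, not an authenticated API call.
	client := &http.Client{
		Timeout: 32 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// With the apiserver down this prints the same
		// "connect: connection refused" seen throughout the log.
		fmt.Println("openapi unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}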
	W1014 20:22:19.916885  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:22.415932  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:22.450166  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:22:22.505420  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:22.505542  489898 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:22:22.507884  489898 out.go:179] * Enabled addons: 
	I1014 20:22:22.509381  489898 addons.go:514] duration metric: took 1m50.724843787s for enable addons: enabled=[]
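The retry.go:31 entries above show the general shape of minikube's behavior here: each failed `kubectl apply` is re-run after a growing, jittered delay until the attempt budget is exhausted. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical applyManifest/retryWithBackoff pair and hand-picked attempt and delay bounds; only the kubectl arguments and manifest path are taken from the log, and none of this is minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out the same way the log shows:
// kubectl apply --force -f <manifest>.
func applyManifest(manifest string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

// retryWithBackoff re-runs fn, roughly doubling the wait and adding
// jitter each time, until it succeeds or attempts run out -- the same
// shape as the "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	wait := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		wait *= 2
	}
	return err
}

func main() {
	// Manifest path copied from the log; 10 attempts from a 1s base
	// delay are illustrative bounds, not minikube's.
	err := retryWithBackoff(10, time.Second, func() error {
		return applyManifest("/etc/kubernetes/addons/storageclass.yaml")
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}

Capping the maximum wait would also be sensible: the delays in this log top out around 40–44s, which keeps a flapping apiserver from stretching a single addon enable indefinitely.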
	W1014 20:22:24.416034  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:26.416184  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:28.416951  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:30.915926  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:32.916501  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:35.416234  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:37.916248  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:40.416105  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:42.416850  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:44.916913  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:47.416908  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:49.916150  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:52.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:54.916288  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:57.416136  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:59.916282  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:02.416423  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:04.916846  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:07.416742  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:09.916646  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:12.416492  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:14.916627  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:17.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:19.916979  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:22.416907  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:24.916676  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:27.416100  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:29.416688  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:31.916476  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:33.916684  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:36.415978  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:38.416324  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:40.916275  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:43.416374  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:45.916584  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:48.416514  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:50.916574  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:53.416488  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:55.916368  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:58.416228  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:00.916323  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:03.416241  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:05.916163  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:08.416132  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:10.416813  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:12.916784  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:15.416869  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:17.916799  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:20.416685  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:22.916800  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:25.416843  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:27.916316  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:29.916868  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:32.416204  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:34.416320  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:36.416834  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:38.916212  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:40.916807  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:43.416048  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:45.916074  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:47.916191  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:49.916569  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:52.415930  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:54.916217  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:57.415888  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:59.416062  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	... (same "will retry" warning repeated roughly every 2–2.5s from 20:25:01 through 20:26:27) ...
	W1014 20:26:29.916832  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:26:31.915871  489898 node_ready.go:38] duration metric: took 6m0.000553348s for node "ha-579393" to be "Ready" ...
	I1014 20:26:31.918599  489898 out.go:203] 
	W1014 20:26:31.920009  489898 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:26:31.920031  489898 out.go:285] * 
	W1014 20:26:31.921790  489898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:26:31.923205  489898 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
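The wall of "connection refused" warnings above is minikube's node-readiness poll (node_ready.go) exhausting its 6m0s budget: the apiserver on 192.168.49.2:8443 never came back after the container restart, so every probe failed until the WaitNodeCondition deadline fired. A minimal sketch of that style of poll using client-go — illustrative only, not minikube's actual implementation; the node name and default kubeconfig path are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes a kubeconfig at ~/.kube/config; minikube uses its own path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s for up to 6m, checking the node's Ready condition.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "ha-579393", metav1.GetOptions{})
				if err != nil {
					// Connection refused is transient while the apiserver
					// restarts, so log it and keep polling.
					fmt.Printf("error getting node (will retry): %v\n", err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

Returning (false, nil) from the condition treats the dial error as transient, which is why the log shows "(will retry)" on every probe rather than an immediate failure, and why the run only exits once the context deadline is exceeded.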
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:20:25.14898301Z",
	            "FinishedAt": "2025-10-14T20:20:23.81110806Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c02e504f33501e83f3b9b4187e2f5d221a1e738b5d0f6faf24795ae2990234ba",
	            "SandboxKey": "/var/run/docker/netns/c02e504f3350",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32913"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32914"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32917"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32915"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32916"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:54:39:d3:9a:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "aaae67063c867134e15c0950594bd5a6f0ea17d0626d59a73f395e78fd0d78e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
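In a dump like the one above, the fields a post-mortem usually needs are State.Status and the forwarded host ports under NetworkSettings.Ports. A small Go sketch — a hypothetical helper, not part of the test harness — that shells out to docker inspect and extracts both:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Just the fields we care about from `docker inspect <name>`.
	type containerInfo struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
		} `json:"State"`
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// docker inspect prints a JSON array, one element per container.
		out, err := exec.Command("docker", "inspect", "ha-579393").Output()
		if err != nil {
			panic(err)
		}
		var infos []containerInfo
		if err := json.Unmarshal(out, &infos); err != nil {
			panic(err)
		}
		for _, ci := range infos {
			fmt.Println("status:", ci.State.Status) // "running" in the dump above
			for _, b := range ci.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("apiserver forwarded to %s:%s\n", b.HostIP, b.HostPort)
			}
		}
	}

Against this run it would report status "running" and the apiserver port 8443/tcp forwarded to 127.0.0.1:32916 — the same endpoint the harness dials below.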
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (311.63064ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
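The "(may be ok)" note reflects how the harness reads minikube status: the command exits non-zero whenever any component is unhealthy, even though the host container itself still reports Running, so the exit code is recorded but not treated as fatal at this step. A sketch of capturing stdout and the exit code together — illustrative, not the actual helpers_test code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "ha-579393")
		out, err := cmd.Output() // stdout only; Output still fills it on non-zero exit
		host := strings.TrimSpace(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host %q, all components healthy\n", host)
		case errors.As(err, &exitErr):
			// Exit status 2 in this run: host is "Running" but other
			// components (apiserver, kubelet) are not healthy.
			fmt.Printf("host %q, exit code %d (may be ok)\n", host, exitErr.ExitCode())
		default:
			panic(err) // binary missing, not executable, etc.
		}
	}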
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                                               │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                                             │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │ 14 Oct 25 20:20 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:20:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:20:24.896990  489898 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:24.897284  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897296  489898 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:24.897302  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897542  489898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:24.898101  489898 out.go:368] Setting JSON to false
	I1014 20:20:24.899124  489898 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10971,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:20:24.899246  489898 start.go:141] virtualization: kvm guest
	I1014 20:20:24.901656  489898 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:20:24.903433  489898 notify.go:220] Checking for updates...
	I1014 20:20:24.903468  489898 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:20:24.905313  489898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:20:24.907144  489898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:24.908684  489898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:20:24.910248  489898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:20:24.911693  489898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:20:24.913427  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:24.913995  489898 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:20:24.938824  489898 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:20:24.939019  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.007228  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:24.996574045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.007350  489898 docker.go:318] overlay module found
	I1014 20:20:25.010286  489898 out.go:179] * Using the docker driver based on existing profile
	I1014 20:20:25.011792  489898 start.go:305] selected driver: docker
	I1014 20:20:25.011819  489898 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.011930  489898 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:20:25.012031  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.072380  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:25.062155128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.073231  489898 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:20:25.073267  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:25.073308  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:25.073364  489898 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1014 20:20:25.075991  489898 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:20:25.077302  489898 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:20:25.079216  489898 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:20:25.080637  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:25.080691  489898 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:20:25.080700  489898 cache.go:58] Caching tarball of preloaded images
	I1014 20:20:25.080769  489898 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:20:25.080800  489898 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:20:25.080809  489898 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:20:25.080900  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.101692  489898 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:20:25.101730  489898 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:20:25.101767  489898 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:20:25.101806  489898 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:20:25.101915  489898 start.go:364] duration metric: took 60.146µs to acquireMachinesLock for "ha-579393"
	I1014 20:20:25.101941  489898 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:20:25.101949  489898 fix.go:54] fixHost starting: 
	I1014 20:20:25.102193  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.120107  489898 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:20:25.120156  489898 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:20:25.122045  489898 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:20:25.122122  489898 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:20:25.362928  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.381546  489898 kic.go:430] container "ha-579393" state is running.
	I1014 20:20:25.382015  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:25.401922  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.402188  489898 machine.go:93] provisionDockerMachine start ...
	I1014 20:20:25.402264  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:25.421465  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:25.421725  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:25.421746  489898 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:20:25.422396  489898 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:32913: read: connection reset by peer
	I1014 20:20:28.574789  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.574824  489898 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:20:28.574892  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.593311  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.593527  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.593539  489898 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:20:28.751227  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.751331  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.771390  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.771598  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.771614  489898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:20:28.919191  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:20:28.919232  489898 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:20:28.919288  489898 ubuntu.go:190] setting up certificates
	I1014 20:20:28.919304  489898 provision.go:84] configureAuth start
	I1014 20:20:28.919374  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:28.937555  489898 provision.go:143] copyHostCerts
	I1014 20:20:28.937600  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937642  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:20:28.937656  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937748  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:20:28.938006  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938042  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:20:28.938054  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938107  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:20:28.938179  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938205  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:20:28.938214  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938254  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:20:28.938327  489898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:20:28.984141  489898 provision.go:177] copyRemoteCerts
	I1014 20:20:28.984206  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:20:28.984251  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.002844  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.106575  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:20:29.106640  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:20:29.125278  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:20:29.125393  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:20:29.144458  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:20:29.144532  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:20:29.163285  489898 provision.go:87] duration metric: took 243.963585ms to configureAuth
	I1014 20:20:29.163319  489898 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:20:29.163543  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:29.163679  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.182115  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:29.182329  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:29.182344  489898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:20:29.446950  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:20:29.446980  489898 machine.go:96] duration metric: took 4.044773675s to provisionDockerMachine
	I1014 20:20:29.446995  489898 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:20:29.447007  489898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:20:29.447058  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:20:29.447097  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.465397  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.570738  489898 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:20:29.574668  489898 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:20:29.574700  489898 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:20:29.574712  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:20:29.574793  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:20:29.574907  489898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:20:29.574923  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:20:29.575031  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:20:29.583269  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:29.600614  489898 start.go:296] duration metric: took 153.60445ms for postStartSetup
	I1014 20:20:29.600725  489898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:20:29.600803  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.618422  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.719349  489898 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:20:29.724065  489898 fix.go:56] duration metric: took 4.622108754s for fixHost
	I1014 20:20:29.724091  489898 start.go:83] releasing machines lock for "ha-579393", held for 4.622163128s
	I1014 20:20:29.724158  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:29.742263  489898 ssh_runner.go:195] Run: cat /version.json
	I1014 20:20:29.742292  489898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:20:29.742312  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.742360  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.760806  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.761892  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.861522  489898 ssh_runner.go:195] Run: systemctl --version
	I1014 20:20:29.920675  489898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:20:29.958043  489898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:20:29.963013  489898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:20:29.963081  489898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:20:29.971684  489898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:20:29.971715  489898 start.go:495] detecting cgroup driver to use...
	I1014 20:20:29.971777  489898 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:20:29.971827  489898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:20:29.986651  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:20:29.999493  489898 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:20:29.999555  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:20:30.014987  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:20:30.028206  489898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:20:30.108561  489898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:20:30.189013  489898 docker.go:234] disabling docker service ...
	I1014 20:20:30.189092  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:20:30.205263  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:20:30.218011  489898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:20:30.297456  489898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:20:30.378372  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:20:30.391541  489898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:20:30.406068  489898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:20:30.406139  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.415378  489898 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:20:30.415458  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.425041  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.434283  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.443270  489898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:20:30.451367  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.460460  489898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.469171  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
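	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly these settings (reconstructed from the commands; the exact file layout is not shown in the log):
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]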
	I1014 20:20:30.478459  489898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:20:30.486229  489898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:20:30.493996  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:30.573307  489898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:20:30.683147  489898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:20:30.683209  489898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:20:30.687341  489898 start.go:563] Will wait 60s for crictl version
	I1014 20:20:30.687394  489898 ssh_runner.go:195] Run: which crictl
	I1014 20:20:30.690908  489898 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:20:30.716598  489898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
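	crictl picks up its endpoint from the /etc/crictl.yaml written earlier; the equivalent explicit query (illustrative):
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version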
	I1014 20:20:30.716668  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.746004  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.777705  489898 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:20:30.778957  489898 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:20:30.796976  489898 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:20:30.801378  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
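	The replace-then-copy above is effectively atomic for readers of /etc/hosts; afterwards the file pins the gateway with one tab-separated entry:
	    192.168.49.1	host.minikube.internal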
	I1014 20:20:30.812065  489898 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:20:30.812194  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:30.812256  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.843803  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.843825  489898 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:20:30.843871  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.870297  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.870318  489898 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:20:30.870326  489898 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:20:30.870413  489898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
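	The [Service] override above is the drop-in scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes); once staged, the merged unit definition can be inspected with (illustrative):
	    systemctl cat kubelet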
	I1014 20:20:30.870472  489898 ssh_runner.go:195] Run: crio config
	I1014 20:20:30.916212  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:30.916239  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:30.916269  489898 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:20:30.916293  489898 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:20:30.916410  489898 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:20:30.916472  489898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:20:30.925261  489898 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:20:30.925338  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:20:30.933417  489898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:20:30.946346  489898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:20:30.959345  489898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
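	The kubeadm.yaml.new staged here (2205 bytes) is the config printed above; it can be sanity-checked with kubeadm itself before use (illustrative, assuming kubeadm sits alongside kubelet in the binaries dir):
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new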
	I1014 20:20:30.972547  489898 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:20:30.976536  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.987410  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.067104  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.089513  489898 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:20:31.089537  489898 certs.go:195] generating shared ca certs ...
	I1014 20:20:31.089557  489898 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.089728  489898 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:20:31.089804  489898 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:20:31.089820  489898 certs.go:257] generating profile certs ...
	I1014 20:20:31.089945  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:20:31.090021  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:20:31.090072  489898 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:20:31.090088  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:20:31.090106  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:20:31.090118  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:20:31.090131  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:20:31.090142  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:20:31.090156  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:20:31.090168  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:20:31.090182  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:20:31.090241  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:20:31.090277  489898 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:20:31.090288  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:20:31.090313  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:20:31.090343  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:20:31.090372  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:20:31.090421  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:31.090453  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.090470  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.090487  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.091297  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:20:31.111369  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:20:31.131215  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:20:31.152691  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:20:31.177685  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:20:31.197344  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:20:31.216500  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:20:31.234564  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:20:31.252166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:20:31.269606  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:20:31.288166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:20:31.305894  489898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:20:31.318425  489898 ssh_runner.go:195] Run: openssl version
	I1014 20:20:31.324791  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:20:31.333410  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337628  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337704  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.372321  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:20:31.381116  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:20:31.390138  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394052  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394109  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.429938  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:20:31.438655  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:20:31.447298  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451279  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451343  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.485062  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
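	The hashed link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup scheme; the same link can be rebuilt by hand (illustrative):
	    hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # ".0" is the collision counter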
	I1014 20:20:31.493976  489898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:20:31.498163  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:20:31.532437  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:20:31.569216  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:20:31.605892  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:20:31.653534  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:20:31.690955  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
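	Each -checkend 86400 call exits non-zero only if the certificate expires within the next 86400 seconds (24 hours); a standalone form of the gate (illustrative):
	    crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	    if ! sudo openssl x509 -noout -in "$crt" -checkend 86400; then
	      echo "$crt expires within 24h"
	    fi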
	I1014 20:20:31.725979  489898 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:31.726143  489898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:20:31.726202  489898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:20:31.755641  489898 cri.go:89] found id: ""
	I1014 20:20:31.755728  489898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:20:31.764571  489898 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:20:31.764596  489898 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:20:31.764641  489898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:20:31.772544  489898 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:31.772997  489898 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.773109  489898 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:20:31.773353  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.773843  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.774269  489898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:20:31.774283  489898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:20:31.774287  489898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:20:31.774291  489898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:20:31.774297  489898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:20:31.774312  489898 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:20:31.774673  489898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:20:31.783543  489898 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:20:31.783582  489898 kubeadm.go:601] duration metric: took 18.979903ms to restartPrimaryControlPlane
	I1014 20:20:31.783595  489898 kubeadm.go:402] duration metric: took 57.628352ms to StartCluster
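	The "does not require reconfiguration" verdict rides on diff's exit status: 0 means the staged kubeadm.yaml.new matches the running config byte-for-byte. The same check by hand (illustrative):
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "configs identical: restart without re-running kubeadm init"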
	I1014 20:20:31.783616  489898 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.783711  489898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.784245  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.784483  489898 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:20:31.784537  489898 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:20:31.784634  489898 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:20:31.784650  489898 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:20:31.784678  489898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:20:31.784687  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:31.784656  489898 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:20:31.784839  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.784988  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.785316  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.789929  489898 out.go:179] * Verifying Kubernetes components...
	I1014 20:20:31.791591  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.805965  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.806386  489898 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:20:31.806441  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.806931  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.807584  489898 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:20:31.809119  489898 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.809148  489898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:20:31.809214  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.832877  489898 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:20:31.832915  489898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:20:31.832999  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.836985  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.854396  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.900722  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.915259  489898 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
	I1014 20:20:31.948248  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.965203  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.010301  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.010345  489898 retry.go:31] will retry after 180.735659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
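	Every apply failure in this stretch has the same root cause: kubectl cannot download the OpenAPI schema for validation because nothing is listening on localhost:8443 yet. The retry.go delays roughly double with jitter (180ms, 400ms, 506ms, ... 6.4s); a sketch of that cadence (illustrative, not minikube's actual retry implementation):
	    delay_ms=180
	    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	          /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	      echo "will retry after ${delay_ms}ms"
	      sleep "$(printf '%d.%03d' $((delay_ms / 1000)) $((delay_ms % 1000)))"
	      delay_ms=$(( delay_ms * 2 + RANDOM % delay_ms ))   # double, plus jitter
	    done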
	W1014 20:20:32.026606  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.026655  489898 retry.go:31] will retry after 185.14299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.191908  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:32.212727  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.261347  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.261379  489898 retry.go:31] will retry after 400.487372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.273847  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.273879  489898 retry.go:31] will retry after 332.539123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.606897  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.660842  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.660884  489898 retry.go:31] will retry after 506.115799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.662966  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:32.717555  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.717593  489898 retry.go:31] will retry after 698.279488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.167777  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:33.223185  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.223218  489898 retry.go:31] will retry after 929.627856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.416016  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:33.471972  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.472005  489898 retry.go:31] will retry after 760.905339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:33.916053  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
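	The node Ready poll hits the same wall from the host side: the apiserver on 192.168.49.2:8443 is still down after the crio restart. A manual probe that distinguishes "connection refused" from "up but unhealthy" (illustrative):
	    curl -sk --max-time 2 https://192.168.49.2:8443/healthz || echo "apiserver not accepting connections"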
	I1014 20:20:34.153507  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:34.208070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.208106  489898 retry.go:31] will retry after 1.612829525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.233328  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:34.287658  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.287702  489898 retry.go:31] will retry after 818.99186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.107035  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:35.161369  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.161406  489898 retry.go:31] will retry after 2.372177473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.821805  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:35.876422  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.876462  489898 retry.go:31] will retry after 1.76203735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:36.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:37.533877  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:37.589802  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.589836  489898 retry.go:31] will retry after 2.151742617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.639147  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:37.694173  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.694209  489898 retry.go:31] will retry after 2.414165218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:38.916349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:39.741973  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:39.798810  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:39.798851  489898 retry.go:31] will retry after 6.380239181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.109367  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:40.165446  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.165488  489898 retry.go:31] will retry after 4.273629229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:40.916572  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:43.416160  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:44.439617  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:44.495805  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:44.495847  489898 retry.go:31] will retry after 5.884728712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:45.916420  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:46.179913  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:46.236772  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:46.236810  489898 retry.go:31] will retry after 6.359293031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:48.416258  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:50.381581  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:50.416856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:50.439004  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:50.439036  489898 retry.go:31] will retry after 11.771270745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.597189  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:52.652445  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.652476  489898 retry.go:31] will retry after 10.720617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:52.916399  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:54.916966  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:57.416509  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:59.416864  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:01.916327  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:02.210789  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:02.266987  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:02.267021  489898 retry.go:31] will retry after 17.660934523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.373440  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:03.428855  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.428886  489898 retry.go:31] will retry after 19.842704585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:04.416008  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same node_ready.go:55 warning repeated every ~2.5s, 6 more times through 20:21:18.416349 ...]
	I1014 20:21:19.929156  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:19.984438  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:19.984472  489898 retry.go:31] will retry after 17.500549438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:20.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:22.916397  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:23.271863  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:23.329260  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:23.329293  489898 retry.go:31] will retry after 15.097428161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:24.916706  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same node_ready.go:55 warning repeated every ~2.5s, 5 more times through 20:21:37.416493 ...]
	I1014 20:21:37.485708  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:37.543070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:37.543103  489898 retry.go:31] will retry after 40.949070497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.427486  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:38.483097  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.483131  489898 retry.go:31] will retry after 43.966081483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:39.916663  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same node_ready.go:55 warning repeated every ~2.5s, 16 more times through 20:22:17.915856 ...]
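The node_ready.go:55 lines are a separate poll: roughly every 2.5 seconds minikube asks the apiserver at 192.168.49.2:8443 for node "ha-579393" and checks its Ready condition, retrying on any error until a 6-minute budget expires (the log later reports "took 6m0.000553348s for node ... to be Ready"). A self-contained sketch of an equivalent check with client-go; the kubeconfig path, interval, and timeout here are assumptions for illustration, not minikube's internal wiring:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// context deadline expires, logging transient errors much like the warnings
// above. Illustrative only.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as at the end of this log
		case <-ticker.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-579393"))
}
```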
	I1014 20:22:18.493252  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:22:18.550465  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:18.550634  489898 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1014 20:22:19.916885  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:22.415932  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:22.450166  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:22:22.505420  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:22.505542  489898 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:22:22.507884  489898 out.go:179] * Enabled addons: 
	I1014 20:22:22.509381  489898 addons.go:514] duration metric: took 1m50.724843787s for enable addons: enabled=[]
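At this point both addon callbacks have exhausted their final backoff delays and fail for good, so the addons step finishes with an empty set (enabled=[]) after 1m50s; only the independent node-Ready poll continues below, on its own 6-minute budget.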
	W1014 20:22:24.416034  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... same node_ready.go:55 warning repeated every ~2.5s, 105 more times through 20:26:29.916832 ...]
	I1014 20:26:31.915871  489898 node_ready.go:38] duration metric: took 6m0.000553348s for node "ha-579393" to be "Ready" ...
	I1014 20:26:31.918599  489898 out.go:203] 
	W1014 20:26:31.920009  489898 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:26:31.920031  489898 out.go:285] * 
	W1014 20:26:31.921790  489898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:26:31.923205  489898 out.go:203] 
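Every distinct failure in this log reduces to one symptom: nothing is accepting connections on port 8443, whether addressed as localhost from inside the node or as 192.168.49.2 from the host. A useful first triage step is separating "connection refused" (no apiserver process listening, as here) from "listening but unhealthy". A hedged probe sketch; the endpoints are taken from the log, and TLS verification is skipped because this is a liveness check, not an authenticated call:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe hits the apiserver's /healthz endpoint and distinguishes a refused
// connection (apiserver not running, as in this log) from an unhealthy but
// listening server. Sketch only.
func probe(url string) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver's serving cert is not in our trust store here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: %v\n", url, err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s %s\n", url, resp.Status, body)
}

func main() {
	// Both endpoints the log shows failing.
	probe("https://localhost:8443/healthz")
	probe("https://192.168.49.2:8443/healthz")
}
```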
	
	
	==> CRI-O <==
	Oct 14 20:26:21 ha-579393 crio[522]: time="2025-10-14T20:26:21.210864178Z" level=info msg="createCtr: removing container 1cd2cdfbb298744443a4df2b5fe2f19d661d9735f28661fc953c26c38a24f373" id=caadb247-1972-4c59-b251-31eb63ec6f85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:21 ha-579393 crio[522]: time="2025-10-14T20:26:21.210898395Z" level=info msg="createCtr: deleting container 1cd2cdfbb298744443a4df2b5fe2f19d661d9735f28661fc953c26c38a24f373 from storage" id=caadb247-1972-4c59-b251-31eb63ec6f85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:21 ha-579393 crio[522]: time="2025-10-14T20:26:21.213076991Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=caadb247-1972-4c59-b251-31eb63ec6f85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.182831465Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b98c73d2-b7e6-4811-b3d6-091dbd3d23ec name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.183772574Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c4456e99-9bad-4b09-8dfd-a4a806d36455 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.184795159Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-579393/kube-controller-manager" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.185069439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.188460383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.1889532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.205250953Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206739815Z" level=info msg="createCtr: deleting container ID 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048 from idIndex" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206804664Z" level=info msg="createCtr: removing container 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206850624Z" level=info msg="createCtr: deleting container 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048 from storage" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.209012681Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.183633141Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=167c86e5-60b3-47f2-b9b8-98db022ed999 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.184723744Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=892d8b85-81c2-4b15-90ff-ef55e352edc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.185932607Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.186213766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.190737476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.191229955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.207102111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208579169Z" level=info msg="createCtr: deleting container ID 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from idIndex" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208619796Z" level=info msg="createCtr: removing container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208664794Z" level=info msg="createCtr: deleting container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from storage" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.210726815Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:26:32.891716    2004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:32.892248    2004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:32.893804    2004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:32.894295    2004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:32.896069    2004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:26:32 up  3:08,  0 user,  load average: 0.16, 0.07, 0.25
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:26:21 ha-579393 kubelet[673]: E1014 20:26:21.213484     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:21 ha-579393 kubelet[673]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:21 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:21 ha-579393 kubelet[673]: E1014 20:26:21.213521     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:26:23 ha-579393 kubelet[673]: E1014 20:26:23.182345     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:23 ha-579393 kubelet[673]: E1014 20:26:23.209352     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:23 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:23 ha-579393 kubelet[673]:  > podSandboxID="e004621e8ef5d1c744f5e29ef568b6945621102bc660bbf10fca36877a257351"
	Oct 14 20:26:23 ha-579393 kubelet[673]: E1014 20:26:23.209455     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:23 ha-579393 kubelet[673]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-579393_kube-system(514451ea1eb9e52e24cc36daace2ea4a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:23 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:23 ha-579393 kubelet[673]: E1014 20:26:23.209490     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-579393" podUID="514451ea1eb9e52e24cc36daace2ea4a"
	Oct 14 20:26:26 ha-579393 kubelet[673]: E1014 20:26:26.797604     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e75138c8fc15b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:20:31.171502427 +0000 UTC m=+0.079567776,LastTimestamp:2025-10-14 20:20:31.171502427 +0000 UTC m=+0.079567776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:26:26 ha-579393 kubelet[673]: E1014 20:26:26.823387     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:26:26 ha-579393 kubelet[673]: I1014 20:26:26.999561     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:26:27 ha-579393 kubelet[673]: E1014 20:26:26.999973     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.183164     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211092     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:30 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > podSandboxID="ace7e840fe529dab46ef907372a1c92e141c023a259dd49f8023c7b15dcf1a62"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211196     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:30 ha-579393 kubelet[673]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211232     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:26:31 ha-579393 kubelet[673]: E1014 20:26:31.198909     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (309.102257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.46s)
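Every control-plane container in the logs above dies with the same runtime error, "cannot open sd-bus: No such file or directory": the OCI runtime was asked to manage cgroups through systemd but cannot reach a systemd bus inside the node. A minimal first check, sketched under the assumption that the ha-579393 node still accepts SSH and runs a standard procps userland:

	# Is systemd actually running as PID 1 inside the kic container?
	out/minikube-linux-amd64 -p ha-579393 ssh -- ps -p 1 -o comm=
	# Do the bus sockets sd-bus would connect to exist?
	out/minikube-linux-amd64 -p ha-579393 ssh -- ls -l /run/dbus/system_bus_socket /run/systemd/private

If either path is missing, that would point at node init (systemd/dbus never came up after the restart) rather than at kubelet or the images.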

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-579393" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
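The assertion only reads the top-level "Status" field buried in that JSON. For checking a run by hand, the same field can be extracted from the profile list output; a minimal sketch, assuming jq is available on the host:

	out/minikube-linux-amd64 profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'
	# ha-579393: Starting    (the test wanted "Degraded" here)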
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:20:25.14898301Z",
	            "FinishedAt": "2025-10-14T20:20:23.81110806Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c02e504f33501e83f3b9b4187e2f5d221a1e738b5d0f6faf24795ae2990234ba",
	            "SandboxKey": "/var/run/docker/netns/c02e504f3350",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32913"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32914"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32917"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32915"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32916"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:54:39:d3:9a:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "aaae67063c867134e15c0950594bd5a6f0ea17d0626d59a73f395e78fd0d78e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
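The full inspect dump above can be narrowed to the two fields these post-mortems actually consult, container state and the cluster IP, using docker inspect's built-in --format templating; a sketch:

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-579393").IPAddress}}' ha-579393
	# running 192.168.49.2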
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (317.678547ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                                               │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                                             │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │ 14 Oct 25 20:20 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:20:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:20:24.896990  489898 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:24.897284  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897296  489898 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:24.897302  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897542  489898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:24.898101  489898 out.go:368] Setting JSON to false
	I1014 20:20:24.899124  489898 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10971,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:20:24.899246  489898 start.go:141] virtualization: kvm guest
	I1014 20:20:24.901656  489898 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:20:24.903433  489898 notify.go:220] Checking for updates...
	I1014 20:20:24.903468  489898 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:20:24.905313  489898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:20:24.907144  489898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:24.908684  489898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:20:24.910248  489898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:20:24.911693  489898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:20:24.913427  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:24.913995  489898 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:20:24.938824  489898 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:20:24.939019  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.007228  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:24.996574045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.007350  489898 docker.go:318] overlay module found
	I1014 20:20:25.010286  489898 out.go:179] * Using the docker driver based on existing profile
	I1014 20:20:25.011792  489898 start.go:305] selected driver: docker
	I1014 20:20:25.011819  489898 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.011930  489898 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:20:25.012031  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.072380  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:25.062155128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.073231  489898 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:20:25.073267  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:25.073308  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:25.073364  489898 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.075991  489898 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:20:25.077302  489898 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:20:25.079216  489898 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:20:25.080637  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:25.080691  489898 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:20:25.080700  489898 cache.go:58] Caching tarball of preloaded images
	I1014 20:20:25.080769  489898 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:20:25.080800  489898 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:20:25.080809  489898 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:20:25.080900  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.101692  489898 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:20:25.101730  489898 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:20:25.101767  489898 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:20:25.101806  489898 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:20:25.101915  489898 start.go:364] duration metric: took 60.146µs to acquireMachinesLock for "ha-579393"
	I1014 20:20:25.101941  489898 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:20:25.101949  489898 fix.go:54] fixHost starting: 
	I1014 20:20:25.102193  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.120107  489898 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:20:25.120156  489898 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:20:25.122045  489898 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:20:25.122122  489898 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:20:25.362928  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.381546  489898 kic.go:430] container "ha-579393" state is running.
	I1014 20:20:25.382015  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:25.401922  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.402188  489898 machine.go:93] provisionDockerMachine start ...
	I1014 20:20:25.402264  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:25.421465  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:25.421725  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:25.421746  489898 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:20:25.422396  489898 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:32913: read: connection reset by peer
	I1014 20:20:28.574789  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.574824  489898 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:20:28.574892  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.593311  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.593527  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.593539  489898 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:20:28.751227  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.751331  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.771390  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.771598  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.771614  489898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:20:28.919191  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:20:28.919232  489898 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:20:28.919288  489898 ubuntu.go:190] setting up certificates
	I1014 20:20:28.919304  489898 provision.go:84] configureAuth start
	I1014 20:20:28.919374  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:28.937555  489898 provision.go:143] copyHostCerts
	I1014 20:20:28.937600  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937642  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:20:28.937656  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937748  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:20:28.938006  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938042  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:20:28.938054  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938107  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:20:28.938179  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938205  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:20:28.938214  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938254  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:20:28.938327  489898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:20:28.984141  489898 provision.go:177] copyRemoteCerts
	I1014 20:20:28.984206  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:20:28.984251  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.002844  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.106575  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:20:29.106640  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:20:29.125278  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:20:29.125393  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:20:29.144458  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:20:29.144532  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:20:29.163285  489898 provision.go:87] duration metric: took 243.963585ms to configureAuth
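The server cert generated above (provision.go:117) embeds the SANs from the san=[...] list. A quick way to confirm what landed on disk, using the exact path the log reports (a sketch, not part of the test run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected: DNS:ha-579393, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.2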
	I1014 20:20:29.163319  489898 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:20:29.163543  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:29.163679  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.182115  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:29.182329  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:29.182344  489898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:20:29.446950  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:20:29.446980  489898 machine.go:96] duration metric: took 4.044773675s to provisionDockerMachine
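The step just completed wrote /etc/sysconfig/crio.minikube and restarted crio. Assuming the kicbase crio unit sources that file (the unit wiring is not shown in this log), the flag should be visible on the running daemon; a sketch to verify from the host:

    docker exec ha-579393 sh -c 'cat /etc/sysconfig/crio.minikube; pgrep -a crio'
    # the crio command line should include: --insecure-registry 10.96.0.0/12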
	I1014 20:20:29.446995  489898 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:20:29.447007  489898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:20:29.447058  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:20:29.447097  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.465397  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.570738  489898 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:20:29.574668  489898 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:20:29.574700  489898 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:20:29.574712  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:20:29.574793  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:20:29.574907  489898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:20:29.574923  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:20:29.575031  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:20:29.583269  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:29.600614  489898 start.go:296] duration metric: took 153.60445ms for postStartSetup
	I1014 20:20:29.600725  489898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:20:29.600803  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.618422  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.719349  489898 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:20:29.724065  489898 fix.go:56] duration metric: took 4.622108754s for fixHost
	I1014 20:20:29.724091  489898 start.go:83] releasing machines lock for "ha-579393", held for 4.622163128s
	I1014 20:20:29.724158  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:29.742263  489898 ssh_runner.go:195] Run: cat /version.json
	I1014 20:20:29.742292  489898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:20:29.742312  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.742360  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.760806  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.761892  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.861522  489898 ssh_runner.go:195] Run: systemctl --version
	I1014 20:20:29.920675  489898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:20:29.958043  489898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:20:29.963013  489898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:20:29.963081  489898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:20:29.971684  489898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
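The find one-liner above renames any bridge/podman CNI config so CRI-O will not load it (here it matched nothing). The same command with shell-safe quoting and comments, as a sketch; note that GNU find substitutes {} inside the sh -c string:

    # disable bridge/podman CNI configs so kindnet stays the only CNI
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' \
      -exec sh -c 'sudo mv {} {}.mk_disabled' \;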
	I1014 20:20:29.971715  489898 start.go:495] detecting cgroup driver to use...
	I1014 20:20:29.971777  489898 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:20:29.971827  489898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:20:29.986651  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:20:29.999493  489898 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:20:29.999555  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:20:30.014987  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:20:30.028206  489898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:20:30.108561  489898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:20:30.189013  489898 docker.go:234] disabling docker service ...
	I1014 20:20:30.189092  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:20:30.205263  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:20:30.218011  489898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:20:30.297456  489898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:20:30.378372  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:20:30.391541  489898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:20:30.406068  489898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:20:30.406139  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.415378  489898 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:20:30.415458  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.425041  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.434283  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.443270  489898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:20:30.451367  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.460460  489898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.469171  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
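Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (section headers assumed from the stock CRI-O layout; the file itself is not printed in this log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]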
	I1014 20:20:30.478459  489898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:20:30.486229  489898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:20:30.493996  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:30.573307  489898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:20:30.683147  489898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:20:30.683209  489898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:20:30.687341  489898 start.go:563] Will wait 60s for crictl version
	I1014 20:20:30.687394  489898 ssh_runner.go:195] Run: which crictl
	I1014 20:20:30.690908  489898 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:20:30.716598  489898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
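The version fields above come back over the CRI socket that the /etc/crictl.yaml written at 20:20:30.391541 points crictl at; with that file in place the endpoint flag is optional. Both forms below should behave identically on the node (a sketch):

    sudo crictl info                                                   # endpoint read from /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info # same call, endpoint explicit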
	I1014 20:20:30.716668  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.746004  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.777705  489898 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:20:30.778957  489898 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:20:30.796976  489898 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:20:30.801378  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
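The hosts rewrite above is a drop-then-append pattern: filter out any stale mapping, append the fresh one, and copy the result back. The same pipeline with comments (a sketch):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # strip any existing entry
      printf '192.168.49.1\thost.minikube.internal\n'   # append the gateway mapping
    } > /tmp/h.$$                                        # $$ = shell PID, per-run temp file
    sudo cp /tmp/h.$$ /etc/hosts                         # cp rewrites in place, keeping the inode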
	I1014 20:20:30.812065  489898 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:20:30.812194  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:30.812256  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.843803  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.843825  489898 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:20:30.843871  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.870297  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.870318  489898 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:20:30.870326  489898 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:20:30.870413  489898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:20:30.870472  489898 ssh_runner.go:195] Run: crio config
	I1014 20:20:30.916212  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:30.916239  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:30.916269  489898 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:20:30.916293  489898 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:20:30.916410  489898 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:20:30.916472  489898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:20:30.925261  489898 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:20:30.925338  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:20:30.933417  489898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:20:30.946346  489898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:20:30.959345  489898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
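At this point the rendered config sits at /var/tmp/minikube/kubeadm.yaml.new on the node (2205 bytes, matching the YAML dumped above). One way to sanity-check it against the pinned kubeadm binary, assuming kubeadm is staged alongside the kubelet and kubectl binaries found above (a sketch, not a step the test performs):

    docker exec ha-579393 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run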
	I1014 20:20:30.972547  489898 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:20:30.976536  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.987410  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.067104  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.089513  489898 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:20:31.089537  489898 certs.go:195] generating shared ca certs ...
	I1014 20:20:31.089557  489898 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.089728  489898 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:20:31.089804  489898 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:20:31.089820  489898 certs.go:257] generating profile certs ...
	I1014 20:20:31.089945  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:20:31.090021  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:20:31.090072  489898 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:20:31.090088  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:20:31.090106  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:20:31.090118  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:20:31.090131  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:20:31.090142  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:20:31.090156  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:20:31.090168  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:20:31.090182  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:20:31.090241  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:20:31.090277  489898 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:20:31.090288  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:20:31.090313  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:20:31.090343  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:20:31.090372  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:20:31.090421  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:31.090453  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.090470  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.090487  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.091297  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:20:31.111369  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:20:31.131215  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:20:31.152691  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:20:31.177685  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:20:31.197344  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:20:31.216500  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:20:31.234564  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:20:31.252166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:20:31.269606  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:20:31.288166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:20:31.305894  489898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:20:31.318425  489898 ssh_runner.go:195] Run: openssl version
	I1014 20:20:31.324791  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:20:31.333410  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337628  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337704  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.372321  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:20:31.381116  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:20:31.390138  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394052  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394109  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.429938  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:20:31.438655  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:20:31.447298  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451279  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451343  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.485062  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
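The .0 link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: openssl x509 -hash prints the value that OpenSSL's CApath lookup expects as the symlink name, which is why each cert is first hashed and then linked. Reproducing one of the links by hand (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                                      # b5213941 in this run
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"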
	I1014 20:20:31.493976  489898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:20:31.498163  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:20:31.532437  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:20:31.569216  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:20:31.605892  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:20:31.653534  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:20:31.690955  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
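Each -checkend 86400 above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so the run is probing for imminent expiry rather than parsing dates. Using the exit status directly (a sketch):

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/front-proxy-client.crt; then
      echo "cert valid for at least 24h"
    else
      echo "cert expires within 24h"
    fi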
	I1014 20:20:31.725979  489898 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:31.726143  489898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:20:31.726202  489898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:20:31.755641  489898 cri.go:89] found id: ""
	I1014 20:20:31.755728  489898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:20:31.764571  489898 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:20:31.764596  489898 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:20:31.764641  489898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:20:31.772544  489898 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:31.772997  489898 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.773109  489898 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:20:31.773353  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.773843  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.774269  489898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:20:31.774283  489898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:20:31.774287  489898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:20:31.774291  489898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:20:31.774297  489898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:20:31.774312  489898 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:20:31.774673  489898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:20:31.783543  489898 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:20:31.783582  489898 kubeadm.go:601] duration metric: took 18.979903ms to restartPrimaryControlPlane
	I1014 20:20:31.783595  489898 kubeadm.go:402] duration metric: took 57.628352ms to StartCluster
	I1014 20:20:31.783616  489898 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.783711  489898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.784245  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.784483  489898 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:20:31.784537  489898 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:20:31.784634  489898 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:20:31.784650  489898 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:20:31.784678  489898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:20:31.784687  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:31.784656  489898 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:20:31.784839  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.784988  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.785316  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.789929  489898 out.go:179] * Verifying Kubernetes components...
	I1014 20:20:31.791591  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.805965  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.806386  489898 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:20:31.806441  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.806931  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.807584  489898 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:20:31.809119  489898 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.809148  489898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:20:31.809214  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.832877  489898 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:20:31.832915  489898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:20:31.832999  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.836985  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.854396  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.900722  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.915259  489898 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
	I1014 20:20:31.948248  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.965203  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.010301  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.010345  489898 retry.go:31] will retry after 180.735659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.026606  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.026655  489898 retry.go:31] will retry after 185.14299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
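	Both applies fail the same way because kube-apiserver is not yet listening on localhost:8443, and retry.go re-runs them with growing, jittered delays (185ms here, then 332ms, 506ms, 929ms, 1.61s, 1.76s for this same manifest further down). A plain-shell equivalent of that backoff loop, assuming simple doubling with a cap; the exact jitter schedule is internal to minikube:

    delay=0.2
    for attempt in $(seq 1 10); do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml && break
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { d *= 2; if (d > 5) d = 5; print d }')   # double, cap at 5s
    done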
	I1014 20:20:32.191908  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:32.212727  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.261347  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.261379  489898 retry.go:31] will retry after 400.487372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.273847  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.273879  489898 retry.go:31] will retry after 332.539123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.606897  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.660842  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.660884  489898 retry.go:31] will retry after 506.115799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.662966  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:32.717555  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.717593  489898 retry.go:31] will retry after 698.279488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.167777  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:33.223185  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.223218  489898 retry.go:31] will retry after 929.627856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.416016  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:33.471972  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.472005  489898 retry.go:31] will retry after 760.905339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:33.916053  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
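node_ready.go polls the node's Ready condition through the API server, so while kube-apiserver is still coming back every poll fails exactly like the Get above. The equivalent check by hand (a sketch):

    kubectl --kubeconfig /home/jenkins/minikube-integration/21409-413763/kubeconfig \
      get node ha-579393 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once the kubelet reports Ready; "connection refused" until the apiserver is up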
	I1014 20:20:34.153507  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:34.208070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.208106  489898 retry.go:31] will retry after 1.612829525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.233328  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:34.287658  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.287702  489898 retry.go:31] will retry after 818.99186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.107035  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:35.161369  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.161406  489898 retry.go:31] will retry after 2.372177473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.821805  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:35.876422  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.876462  489898 retry.go:31] will retry after 1.76203735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:36.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
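
Two different endpoints are being refused in parallel here: the kubectl applies run over SSH inside the node and target https://localhost:8443, while the readiness poll runs on the host and targets https://192.168.49.2:8443. Both TCP dials failing points at kube-apiserver itself being down rather than a host-to-container routing problem. A quick probe that separates the two cases; note the loopback address is only meaningful when run from inside the node (e.g. via minikube ssh), while the node IP can be probed from the host:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The two apiserver endpoints seen in this log: the in-node loopback
        // address used by kubectl, and the node IP used by the host-side poll.
        for _, addr := range []string{"127.0.0.1:8443", "192.168.49.2:8443"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // "connection refused": nothing listening
                continue
            }
            conn.Close()
            fmt.Printf("%s: port open\n", addr)
        }
    }
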
	I1014 20:20:37.533877  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:37.589802  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.589836  489898 retry.go:31] will retry after 2.151742617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.639147  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:37.694173  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.694209  489898 retry.go:31] will retry after 2.414165218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:38.916349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:39.741973  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:39.798810  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:39.798851  489898 retry.go:31] will retry after 6.380239181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.109367  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:40.165446  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.165488  489898 retry.go:31] will retry after 4.273629229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:40.916572  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:43.416160  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:44.439617  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:44.495805  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:44.495847  489898 retry.go:31] will retry after 5.884728712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:45.916420  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:46.179913  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:46.236772  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:46.236810  489898 retry.go:31] will retry after 6.359293031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:48.416258  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:50.381581  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:50.416856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:50.439004  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:50.439036  489898 retry.go:31] will retry after 11.771270745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.597189  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:52.652445  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.652476  489898 retry.go:31] will retry after 10.720617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:52.916399  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:54.916966  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:57.416509  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:59.416864  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:01.916327  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:02.210789  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:02.266987  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:02.267021  489898 retry.go:31] will retry after 17.660934523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.373440  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:03.428855  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.428886  489898 retry.go:31] will retry after 19.842704585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:04.416008  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:06.416555  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:08.916547  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:11.416206  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:13.416608  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:15.916358  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:18.416349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:19.929156  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:19.984438  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:19.984472  489898 retry.go:31] will retry after 17.500549438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:20.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:22.916397  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:23.271863  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:23.329260  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:23.329293  489898 retry.go:31] will retry after 15.097428161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:24.916706  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:27.416721  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:29.916916  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:32.416582  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:34.916674  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:37.416493  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:37.485708  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:37.543070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:37.543103  489898 retry.go:31] will retry after 40.949070497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.427486  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:38.483097  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.483131  489898 retry.go:31] will retry after 43.966081483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:39.916663  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:42.416063  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:44.416626  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:46.916599  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:49.416540  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:51.916813  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:54.416366  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:56.916158  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:58.916828  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:01.416335  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:03.916080  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:05.916821  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:08.416505  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:10.916302  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:13.416113  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:15.416873  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:17.915856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:18.493252  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:22:18.550465  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:18.550634  489898 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
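
After roughly two minutes the retry budget runs out and the failure is surfaced as a warning (out.go:285) rather than an abort: addon enablement is best-effort, so startup presses on to the node-readiness wait. The --validate=false hint in the kubectl error is a red herring here, since skipping schema validation would not make an unreachable apiserver accept the manifest. The control flow, sketched with hypothetical helper names rather than minikube's actual addons.go:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // tryApply stands in for one `kubectl apply` invocation.
    func tryApply(manifest string) error {
        return errors.New("connect: connection refused")
    }

    // enableAddon retries until the attempt budget is spent, then logs a
    // warning and returns instead of aborting the whole start.
    func enableAddon(name, manifest string, attempts int) bool {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if err := tryApply(manifest); err == nil {
                return true
            }
            time.Sleep(delay)
            delay *= 2
        }
        fmt.Printf("! Enabling '%s' returned an error: giving up after %d attempts\n", name, attempts)
        return false
    }

    func main() {
        enabled := []string{}
        if enableAddon("default-storageclass", "/etc/kubernetes/addons/storageclass.yaml", 3) {
            enabled = append(enabled, "default-storageclass")
        }
        if enableAddon("storage-provisioner", "/etc/kubernetes/addons/storage-provisioner.yaml", 3) {
            enabled = append(enabled, "storage-provisioner")
        }
        fmt.Printf("enabled=%v\n", enabled) // matches the empty `enabled=[]` reported below
    }
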
	W1014 20:22:19.916885  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:22.415932  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:22.450166  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:22:22.505420  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:22.505542  489898 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:22:22.507884  489898 out.go:179] * Enabled addons: 
	I1014 20:22:22.509381  489898 addons.go:514] duration metric: took 1m50.724843787s for enable addons: enabled=[]
	W1014 20:22:24.416034  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:26.416184  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:28.416951  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:30.915926  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:32.916501  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:35.416234  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:37.916248  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:40.416105  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:42.416850  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:44.916913  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:47.416908  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:49.916150  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:52.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:54.916288  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:57.416136  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:59.916282  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:02.416423  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:04.916846  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:07.416742  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:09.916646  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:12.416492  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:14.916627  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:17.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:19.916979  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:22.416907  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:24.916676  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:27.416100  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:29.416688  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:31.916476  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:33.916684  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:36.415978  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:38.416324  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:40.916275  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:43.416374  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:45.916584  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:48.416514  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:50.916574  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:53.416488  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:55.916368  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:23:58.416228  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:00.916323  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:03.416241  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:05.916163  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:08.416132  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:10.416813  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:12.916784  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:15.416869  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:17.916799  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:20.416685  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:22.916800  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:25.416843  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:27.916316  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:29.916868  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:32.416204  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:34.416320  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:36.416834  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:38.916212  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:40.916807  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:43.416048  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:45.916074  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:47.916191  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:49.916569  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:52.415930  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:54.916217  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:57.415888  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:24:59.416062  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:01.416475  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:03.916023  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:05.916303  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:08.416300  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:10.416919  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:12.916244  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:14.916535  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:16.916873  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:19.416132  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:21.416381  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:23.915957  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:25.916320  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:28.416305  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:30.416865  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:32.916038  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:34.916413  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:37.416363  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:39.916738  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:42.416168  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:44.916517  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:47.416590  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:49.416883  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:51.916394  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:54.416277  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:56.416473  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:58.916319  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:01.415934  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:03.416196  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:05.416655  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:07.916910  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:10.416263  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:12.416368  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:14.916632  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:16.916923  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:19.415911  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:21.416220  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:23.416304  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:25.416816  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:27.916321  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:29.916832  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:26:31.915871  489898 node_ready.go:38] duration metric: took 6m0.000553348s for node "ha-579393" to be "Ready" ...
	I1014 20:26:31.918599  489898 out.go:203] 
	W1014 20:26:31.920009  489898 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:26:31.920031  489898 out.go:285] * 
	W1014 20:26:31.921790  489898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:26:31.923205  489898 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206804664Z" level=info msg="createCtr: removing container 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206850624Z" level=info msg="createCtr: deleting container 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048 from storage" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.209012681Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.183633141Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=167c86e5-60b3-47f2-b9b8-98db022ed999 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.184723744Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=892d8b85-81c2-4b15-90ff-ef55e352edc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.185932607Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.186213766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.190737476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.191229955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.207102111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208579169Z" level=info msg="createCtr: deleting container ID 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from idIndex" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208619796Z" level=info msg="createCtr: removing container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208664794Z" level=info msg="createCtr: deleting container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from storage" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.210726815Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.182842612Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=78b11ab8-2c8f-4d70-b7bb-fbb373bcdebf name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.183846581Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=85c6a0a3-0b4a-49b9-862c-b1ca332e0177 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.184979721Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.185248366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.189135788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.189599457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.2075276Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209025368Z" level=info msg="createCtr: deleting container ID f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72 from idIndex" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209068332Z" level=info msg="createCtr: removing container f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209114024Z" level=info msg="createCtr: deleting container f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72 from storage" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.211623874Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:26:34.552559    2178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:34.553341    2178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:34.556213    2178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:34.556617    2178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:34.558027    2178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:26:34 up  3:09,  0 user,  load average: 0.16, 0.07, 0.25
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:26:26 ha-579393 kubelet[673]: E1014 20:26:26.797604     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e75138c8fc15b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:20:31.171502427 +0000 UTC m=+0.079567776,LastTimestamp:2025-10-14 20:20:31.171502427 +0000 UTC m=+0.079567776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
	Oct 14 20:26:26 ha-579393 kubelet[673]: E1014 20:26:26.823387     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:26:26 ha-579393 kubelet[673]: I1014 20:26:26.999561     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:26:27 ha-579393 kubelet[673]: E1014 20:26:26.999973     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.183164     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211092     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:30 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > podSandboxID="ace7e840fe529dab46ef907372a1c92e141c023a259dd49f8023c7b15dcf1a62"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211196     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:30 ha-579393 kubelet[673]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211232     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:26:31 ha-579393 kubelet[673]: E1014 20:26:31.198909     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.182260     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.211988     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:33 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:33 ha-579393 kubelet[673]:  > podSandboxID="0ae3992141a445a5fc4b4a1c62c57009afcf5eb3d3627a888843e967b225ebc0"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.212111     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:33 ha-579393 kubelet[673]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:33 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.212148     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.824245     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:26:34 ha-579393 kubelet[673]: I1014 20:26:34.002375     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:26:34 ha-579393 kubelet[673]: E1014 20:26:34.002751     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:26:34 ha-579393 kubelet[673]: E1014 20:26:34.448000     673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	

                                                
                                                
-- /stdout --
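Note on the excerpts above: every kube-apiserver and etcd create attempt fails with the same CRI-O error, "container create failed: cannot open sd-bus: No such file or directory". With CgroupDriver:systemd (visible in the docker info later in this report), the OCI runtime creates container scopes by talking to systemd over D-Bus, so a missing bus socket inside the kic container would block every container start and keep the apiserver down. A minimal diagnostic sketch in Go; the two socket paths are the conventional systemd locations and are an assumption here, not something these logs confirm:

	// Hypothetical sd-bus availability check; not part of the minikube test suite.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// With the systemd cgroup driver, the OCI runtime asks systemd to create
		// container scopes over D-Bus; if no bus socket is reachable, container
		// creation fails the way the CRI-O and kubelet excerpts above show.
		for _, p := range []string{
			"/run/systemd/private",        // systemd's private manager socket (assumed path)
			"/run/dbus/system_bus_socket", // the system D-Bus socket (assumed path)
		} {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: %v\n", p, err)
				continue
			}
			fmt.Println(p, "present")
		}
	}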
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (307.364601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.65s)
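For context, the GUEST_START failure above is a deadline, not a crash: minikube re-issues the same node "Ready" GET roughly every 2.5s (the node_ready.go lines earlier) until the 6m0s wait expires. A rough sketch of that poll shape, with the endpoint, interval, and TLS handling assumed for illustration rather than taken from minikube's code:

	// Illustrative deadline-bounded poll; not minikube's implementation.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The endpoint matches the failing requests in the log above.
		const url = "https://192.168.49.2:8443/api/v1/nodes/ha-579393"
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}

		// Give up after 6m0s, mirroring "wait 6m0s for node" in the exit message.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver reachable:", resp.Status)
				return
			}
			fmt.Println("will retry:", err)
			select {
			case <-ctx.Done():
				fmt.Println("WaitNodeCondition: context deadline exceeded")
				return
			case <-time.After(2500 * time.Millisecond):
			}
		}
	}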

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-579393 node add --control-plane --alsologtostderr -v 5: exit status 103 (255.395962ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-579393 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-579393"

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:26:35.003411  494549 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:26:35.003717  494549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:26:35.003729  494549 out.go:374] Setting ErrFile to fd 2...
	I1014 20:26:35.003734  494549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:26:35.003969  494549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:26:35.004275  494549 mustload.go:65] Loading cluster: ha-579393
	I1014 20:26:35.004699  494549 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:26:35.005155  494549 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:26:35.022876  494549 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:26:35.023201  494549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:26:35.081390  494549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 20:26:35.070488645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:26:35.081525  494549 api_server.go:166] Checking apiserver status ...
	I1014 20:26:35.081571  494549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:26:35.081607  494549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:26:35.099178  494549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	W1014 20:26:35.205689  494549 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:26:35.207786  494549 out.go:179] * The control-plane node ha-579393 apiserver is not running: (state=Stopped)
	I1014 20:26:35.209389  494549 out.go:179]   To start a cluster, run: "minikube start -p ha-579393"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-579393 node add --control-plane --alsologtostderr -v 5" : exit status 103
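Exit status 103 here is an early bail-out rather than a new failure: as the stderr above shows, `node add` first probes for a running apiserver with `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH, and an empty pgrep result is reported as state=Stopped. A simplified local sketch of that probe in Go; the SSH transport and sudo are omitted, and the exit-code mapping is inferred from this log rather than quoted from minikube:

	// Simplified re-run of the apiserver probe seen in the stderr above;
	// minikube executes this command over SSH inside the node container.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits 1 when nothing matches; minikube reports the node's
			// apiserver as state=Stopped and `node add` exits with code 103.
			fmt.Println("apiserver is not running: (state=Stopped)")
			os.Exit(103)
		}
		fmt.Printf("apiserver pid: %s", out)
	}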
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:20:25.14898301Z",
	            "FinishedAt": "2025-10-14T20:20:23.81110806Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c02e504f33501e83f3b9b4187e2f5d221a1e738b5d0f6faf24795ae2990234ba",
	            "SandboxKey": "/var/run/docker/netns/c02e504f3350",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32913"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32914"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32917"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32915"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32916"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:54:39:d3:9a:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "aaae67063c867134e15c0950594bd5a6f0ea17d0626d59a73f395e78fd0d78e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (308.3902ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                                               │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                                             │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │ 14 Oct 25 20:20 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node add --control-plane --alsologtostderr -v 5                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:20:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:20:24.896990  489898 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:24.897284  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897296  489898 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:24.897302  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897542  489898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:24.898101  489898 out.go:368] Setting JSON to false
	I1014 20:20:24.899124  489898 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10971,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:20:24.899246  489898 start.go:141] virtualization: kvm guest
	I1014 20:20:24.901656  489898 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:20:24.903433  489898 notify.go:220] Checking for updates...
	I1014 20:20:24.903468  489898 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:20:24.905313  489898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:20:24.907144  489898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:24.908684  489898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:20:24.910248  489898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:20:24.911693  489898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:20:24.913427  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:24.913995  489898 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:20:24.938824  489898 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:20:24.939019  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.007228  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:24.996574045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.007350  489898 docker.go:318] overlay module found
	I1014 20:20:25.010286  489898 out.go:179] * Using the docker driver based on existing profile
	I1014 20:20:25.011792  489898 start.go:305] selected driver: docker
	I1014 20:20:25.011819  489898 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.011930  489898 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:20:25.012031  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.072380  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:25.062155128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.073231  489898 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:20:25.073267  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:25.073308  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:25.073364  489898 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
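[Editor's note] The cluster config above is what gets persisted to .minikube/profiles/ha-579393/config.json a few lines below. A minimal Go sketch of reading back a small slice of it; the struct here is a hand-picked, hypothetical subset of the fields visible in the dump, not minikube's actual config type (which lives in pkg/minikube/config and has many more fields):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // ClusterConfig is a hypothetical subset of the profile config shown above;
    // json.Unmarshal ignores every field we do not declare.
    type ClusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	KubernetesConfig struct {
    		KubernetesVersion string
    		ClusterName       string
    		ContainerRuntime  string
    	}
    }

    func main() {
    	raw, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/ha-579393/config.json"))
    	if err != nil {
    		panic(err)
    	}
    	var cc ClusterConfig
    	if err := json.Unmarshal(raw, &cc); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: %s on %s, k8s %s\n",
    		cc.Name, cc.KubernetesConfig.ContainerRuntime, cc.Driver, cc.KubernetesConfig.KubernetesVersion)
    }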
	I1014 20:20:25.075991  489898 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:20:25.077302  489898 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:20:25.079216  489898 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:20:25.080637  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:25.080691  489898 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:20:25.080700  489898 cache.go:58] Caching tarball of preloaded images
	I1014 20:20:25.080769  489898 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:20:25.080800  489898 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:20:25.080809  489898 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:20:25.080900  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.101692  489898 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:20:25.101730  489898 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:20:25.101767  489898 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:20:25.101806  489898 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:20:25.101915  489898 start.go:364] duration metric: took 60.146µs to acquireMachinesLock for "ha-579393"
	I1014 20:20:25.101941  489898 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:20:25.101949  489898 fix.go:54] fixHost starting: 
	I1014 20:20:25.102193  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.120107  489898 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:20:25.120156  489898 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:20:25.122045  489898 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:20:25.122122  489898 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:20:25.362928  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.381546  489898 kic.go:430] container "ha-579393" state is running.
	I1014 20:20:25.382015  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:25.401922  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.402188  489898 machine.go:93] provisionDockerMachine start ...
	I1014 20:20:25.402264  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:25.421465  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:25.421725  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:25.421746  489898 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:20:25.422396  489898 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:32913: read: connection reset by peer
	I1014 20:20:28.574789  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.574824  489898 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:20:28.574892  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.593311  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.593527  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.593539  489898 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:20:28.751227  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.751331  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.771390  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.771598  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.771614  489898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:20:28.919191  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
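[Editor's note] The shell block above pins the hostname in /etc/hosts: leave the file alone if an entry already ends with the hostname, otherwise rewrite an existing 127.0.1.1 line or append one. A minimal, pure-Go sketch of the same idempotent edit (hypothetical helper operating on the file contents as a string; the real provisioner runs the shell over SSH):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the logged shell: skip if the name is already
    // present, else rewrite a 127.0.1.1 line, else append one.
    func ensureHostsEntry(hosts, name string) string {
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(strings.TrimSpace(line), " "+name) {
    			return hosts // already pinned
    		}
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 old-name\n", "ha-579393"))
    }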
	I1014 20:20:28.919232  489898 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:20:28.919288  489898 ubuntu.go:190] setting up certificates
	I1014 20:20:28.919304  489898 provision.go:84] configureAuth start
	I1014 20:20:28.919374  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:28.937555  489898 provision.go:143] copyHostCerts
	I1014 20:20:28.937600  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937642  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:20:28.937656  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937748  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:20:28.938006  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938042  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:20:28.938054  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938107  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:20:28.938179  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938205  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:20:28.938214  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938254  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:20:28.938327  489898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:20:28.984141  489898 provision.go:177] copyRemoteCerts
	I1014 20:20:28.984206  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:20:28.984251  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.002844  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.106575  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:20:29.106640  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:20:29.125278  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:20:29.125393  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:20:29.144458  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:20:29.144532  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:20:29.163285  489898 provision.go:87] duration metric: took 243.963585ms to configureAuth
	I1014 20:20:29.163319  489898 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:20:29.163543  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:29.163679  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.182115  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:29.182329  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:29.182344  489898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:20:29.446950  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:20:29.446980  489898 machine.go:96] duration metric: took 4.044773675s to provisionDockerMachine
	I1014 20:20:29.446995  489898 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:20:29.447007  489898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:20:29.447058  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:20:29.447097  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.465397  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.570738  489898 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:20:29.574668  489898 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:20:29.574700  489898 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:20:29.574712  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:20:29.574793  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:20:29.574907  489898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:20:29.574923  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:20:29.575031  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:20:29.583269  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:29.600614  489898 start.go:296] duration metric: took 153.60445ms for postStartSetup
	I1014 20:20:29.600725  489898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:20:29.600803  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.618422  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.719349  489898 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:20:29.724065  489898 fix.go:56] duration metric: took 4.622108754s for fixHost
	I1014 20:20:29.724091  489898 start.go:83] releasing machines lock for "ha-579393", held for 4.622163128s
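[Editor's note] acquireMachinesLock above carries {Delay:500ms Timeout:10m0s}: the caller polls for the lock at that interval until the timeout expires, then releases it when fixHost is done. A sketch of that polling-acquire shape with Go 1.18+'s Mutex.TryLock (illustrative only; minikube uses a named file-based lock so that separate processes are also excluded):

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    	"time"
    )

    // tryAcquire polls the lock every delay until timeout, mirroring the
    // {Delay:500ms Timeout:10m0s} spec in the log line above.
    func tryAcquire(mu *sync.Mutex, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if mu.TryLock() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for machines lock")
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	var mu sync.Mutex
    	start := time.Now()
    	if err := tryAcquire(&mu, 500*time.Millisecond, 10*time.Minute); err != nil {
    		panic(err)
    	}
    	defer mu.Unlock()
    	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    }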
	I1014 20:20:29.724158  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:29.742263  489898 ssh_runner.go:195] Run: cat /version.json
	I1014 20:20:29.742292  489898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:20:29.742312  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.742360  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.760806  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.761892  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.861522  489898 ssh_runner.go:195] Run: systemctl --version
	I1014 20:20:29.920675  489898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:20:29.958043  489898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:20:29.963013  489898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:20:29.963081  489898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:20:29.971684  489898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:20:29.971715  489898 start.go:495] detecting cgroup driver to use...
	I1014 20:20:29.971777  489898 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:20:29.971827  489898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:20:29.986651  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:20:29.999493  489898 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:20:29.999555  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:20:30.014987  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:20:30.028206  489898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:20:30.108561  489898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:20:30.189013  489898 docker.go:234] disabling docker service ...
	I1014 20:20:30.189092  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:20:30.205263  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:20:30.218011  489898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:20:30.297456  489898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:20:30.378372  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:20:30.391541  489898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:20:30.406068  489898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:20:30.406139  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.415378  489898 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:20:30.415458  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.425041  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.434283  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.443270  489898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:20:30.451367  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.460460  489898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.469171  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
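[Editor's note] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it forces the pause image and the systemd cgroup manager, then re-inserts conmon_cgroup after the cgroup_manager line. A sketch of the same edits as Go regex rewrites over the file contents (paths and keys taken from the logged commands; the real code shells the sed out over SSH):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // patchCrioConf mirrors the logged sed edits on 02-crio.conf.
    func patchCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	// Drop any stale conmon_cgroup line, then add one after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	fmt.Print(patchCrioConf("pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"))
    }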
	I1014 20:20:30.478459  489898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:20:30.486229  489898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:20:30.493996  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:30.573307  489898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:20:30.683147  489898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:20:30.683209  489898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:20:30.687341  489898 start.go:563] Will wait 60s for crictl version
	I1014 20:20:30.687394  489898 ssh_runner.go:195] Run: which crictl
	I1014 20:20:30.690908  489898 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:20:30.716598  489898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:20:30.716668  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.746004  489898 ssh_runner.go:195] Run: crio --version
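[Editor's note] The "Will wait 60s for socket path" / "Will wait 60s for crictl version" lines above are a stat-poll on the CRI socket followed by a version probe. A sketch of that readiness check with a context deadline (command names as logged; error handling trimmed to the essentials):

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    	defer cancel()

    	// Poll for the CRI socket the way start.go waits on /var/run/crio/crio.sock.
    	for {
    		if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
    			break
    		}
    		select {
    		case <-ctx.Done():
    			panic("crio.sock never appeared")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}

    	// Then probe the runtime, as the log does with `sudo crictl version`.
    	out, err := exec.CommandContext(ctx, "sudo", "crictl", "version").CombinedOutput()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }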
	I1014 20:20:30.777705  489898 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:20:30.778957  489898 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:20:30.796976  489898 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:20:30.801378  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.812065  489898 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:20:30.812194  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:30.812256  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.843803  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.843825  489898 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:20:30.843871  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.870297  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.870318  489898 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:20:30.870326  489898 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:20:30.870413  489898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:20:30.870472  489898 ssh_runner.go:195] Run: crio config
	I1014 20:20:30.916212  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:30.916239  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:30.916269  489898 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:20:30.916293  489898 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:20:30.916410  489898 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:20:30.916472  489898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:20:30.925261  489898 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:20:30.925338  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:20:30.933417  489898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:20:30.946346  489898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:20:30.959345  489898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
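[Editor's note] The generated kubeadm config dumped above (and written here to kubeadm.yaml.new) is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only sketch that splits such a file and reports each document's kind; this is naive `---` splitting for illustration, where a real parser would use a streaming YAML decoder:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kindOf scans one YAML document for its top-level "kind:" line.
    func kindOf(doc string) string {
    	for _, line := range strings.Split(doc, "\n") {
    		if strings.HasPrefix(line, "kind:") {
    			return strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
    		}
    	}
    	return "(unknown)"
    }

    func main() {
    	yaml := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration`
    	for i, doc := range strings.Split(yaml, "\n---\n") {
    		fmt.Printf("doc %d: %s\n", i, kindOf(doc))
    	}
    }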
	I1014 20:20:30.972547  489898 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:20:30.976536  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.987410  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.067104  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.089513  489898 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:20:31.089537  489898 certs.go:195] generating shared ca certs ...
	I1014 20:20:31.089557  489898 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.089728  489898 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:20:31.089804  489898 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:20:31.089820  489898 certs.go:257] generating profile certs ...
	I1014 20:20:31.089945  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:20:31.090021  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:20:31.090072  489898 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:20:31.090088  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:20:31.090106  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:20:31.090118  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:20:31.090131  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:20:31.090142  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:20:31.090156  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:20:31.090168  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:20:31.090182  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:20:31.090241  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:20:31.090277  489898 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:20:31.090288  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:20:31.090313  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:20:31.090343  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:20:31.090372  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:20:31.090421  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:31.090453  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.090470  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.090487  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.091297  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:20:31.111369  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:20:31.131215  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:20:31.152691  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:20:31.177685  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:20:31.197344  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:20:31.216500  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:20:31.234564  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:20:31.252166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:20:31.269606  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:20:31.288166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:20:31.305894  489898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:20:31.318425  489898 ssh_runner.go:195] Run: openssl version
	I1014 20:20:31.324791  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:20:31.333410  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337628  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337704  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.372321  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:20:31.381116  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:20:31.390138  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394052  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394109  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.429938  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:20:31.438655  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:20:31.447298  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451279  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451343  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.485062  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
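[Editor's note] The `openssl x509 -hash` / `ln -fs` pairs above build the hashed symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL's certificate-directory lookup expects. A sketch that shells out for the subject hash and creates the link; the hash algorithm is OpenSSL-specific, so exec'ing openssl is the pragmatic route (writing under /etc/ssl/certs requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkByHash reproduces `openssl x509 -hash -noout -in <pem>` followed by
    // `ln -fs <pem> /etc/ssl/certs/<hash>.0`.
    func linkByHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // -f semantics: replace an existing link if present
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }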
	I1014 20:20:31.493976  489898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:20:31.498163  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:20:31.532437  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:20:31.569216  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:20:31.605892  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:20:31.653534  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:20:31.690955  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
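[Editor's note] Each `openssl x509 -checkend 86400` above asks whether a control-plane certificate expires within the next 24 hours, which decides whether a restart can reuse the existing certs. The same check in stdlib Go (path reused from the log; the certs are PEM-encoded):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend <seconds>`: true if the
    // certificate's NotAfter falls inside the window from now.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
    	raw, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }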
	I1014 20:20:31.725979  489898 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:31.726143  489898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:20:31.726202  489898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:20:31.755641  489898 cri.go:89] found id: ""
	I1014 20:20:31.755728  489898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:20:31.764571  489898 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:20:31.764596  489898 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:20:31.764641  489898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:20:31.772544  489898 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:31.772997  489898 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.773109  489898 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:20:31.773353  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.773843  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.774269  489898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:20:31.774283  489898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:20:31.774287  489898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:20:31.774291  489898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:20:31.774297  489898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:20:31.774312  489898 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:20:31.774673  489898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:20:31.783543  489898 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:20:31.783582  489898 kubeadm.go:601] duration metric: took 18.979903ms to restartPrimaryControlPlane
	I1014 20:20:31.783595  489898 kubeadm.go:402] duration metric: took 57.628352ms to StartCluster
	I1014 20:20:31.783616  489898 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.783711  489898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.784245  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.784483  489898 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:20:31.784537  489898 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:20:31.784634  489898 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:20:31.784650  489898 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:20:31.784678  489898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:20:31.784687  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:31.784656  489898 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:20:31.784839  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.784988  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.785316  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.789929  489898 out.go:179] * Verifying Kubernetes components...
	I1014 20:20:31.791591  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.805965  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.806386  489898 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:20:31.806441  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.806931  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.807584  489898 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:20:31.809119  489898 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.809148  489898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:20:31.809214  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.832877  489898 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:20:31.832915  489898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:20:31.832999  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.836985  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.854396  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.900722  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.915259  489898 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
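[Editor's note] node_ready.go above waits up to 6m0s for the node's Ready condition. A client-go sketch of the same poll, assuming a standard kubeconfig at the default location; the poll interval of 2s is illustrative, not minikube's exact value:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 2s, up to 6m, mirroring the "waiting up to 6m0s" log line.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-579393", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // not fatal: the apiserver may still be coming up
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node ha-579393 is Ready")
    }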
	I1014 20:20:31.948248  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.965203  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.010301  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.010345  489898 retry.go:31] will retry after 180.735659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.026606  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.026655  489898 retry.go:31] will retry after 185.14299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.191908  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:32.212727  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.261347  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.261379  489898 retry.go:31] will retry after 400.487372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.273847  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.273879  489898 retry.go:31] will retry after 332.539123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.606897  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.660842  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.660884  489898 retry.go:31] will retry after 506.115799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.662966  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:32.717555  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.717593  489898 retry.go:31] will retry after 698.279488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.167777  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:33.223185  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.223218  489898 retry.go:31] will retry after 929.627856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.416016  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:33.471972  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.472005  489898 retry.go:31] will retry after 760.905339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:33.916053  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:34.153507  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:34.208070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.208106  489898 retry.go:31] will retry after 1.612829525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.233328  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:34.287658  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.287702  489898 retry.go:31] will retry after 818.99186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.107035  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:35.161369  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.161406  489898 retry.go:31] will retry after 2.372177473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.821805  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:35.876422  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.876462  489898 retry.go:31] will retry after 1.76203735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:36.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:37.533877  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:37.589802  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.589836  489898 retry.go:31] will retry after 2.151742617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.639147  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:37.694173  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.694209  489898 retry.go:31] will retry after 2.414165218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:38.916349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:39.741973  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:39.798810  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:39.798851  489898 retry.go:31] will retry after 6.380239181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.109367  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:40.165446  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.165488  489898 retry.go:31] will retry after 4.273629229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:40.916572  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:43.416160  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:44.439617  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:44.495805  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:44.495847  489898 retry.go:31] will retry after 5.884728712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:45.916420  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:46.179913  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:46.236772  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:46.236810  489898 retry.go:31] will retry after 6.359293031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:48.416258  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:50.381581  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:50.416856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:50.439004  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:50.439036  489898 retry.go:31] will retry after 11.771270745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.597189  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:52.652445  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.652476  489898 retry.go:31] will retry after 10.720617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:52.916399  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:54.916966  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:57.416509  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:59.416864  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:01.916327  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:02.210789  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:02.266987  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:02.267021  489898 retry.go:31] will retry after 17.660934523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.373440  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:03.428855  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.428886  489898 retry.go:31] will retry after 19.842704585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:04.416008  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:06.416555  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:08.916547  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:11.416206  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:13.416608  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:15.916358  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:18.416349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:19.929156  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:19.984438  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:19.984472  489898 retry.go:31] will retry after 17.500549438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:20.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:22.916397  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:23.271863  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:23.329260  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:23.329293  489898 retry.go:31] will retry after 15.097428161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:24.916706  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:27.416721  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:29.916916  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:32.416582  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:34.916674  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:37.416493  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:37.485708  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:37.543070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:37.543103  489898 retry.go:31] will retry after 40.949070497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.427486  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:38.483097  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.483131  489898 retry.go:31] will retry after 43.966081483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:39.916663  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:42.416063  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:44.416626  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:46.916599  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:49.416540  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:51.916813  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:54.416366  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:56.916158  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:58.916828  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:01.416335  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:03.916080  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:05.916821  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:08.416505  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:10.916302  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:13.416113  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:15.416873  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:17.915856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:18.493252  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:22:18.550465  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:18.550634  489898 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1014 20:22:19.916885  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:22.415932  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:22.450166  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:22:22.505420  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:22.505542  489898 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:22:22.507884  489898 out.go:179] * Enabled addons: 
	I1014 20:22:22.509381  489898 addons.go:514] duration metric: took 1m50.724843787s for enable addons: enabled=[]
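
Note where this leaves the cluster: every apply above failed on the same schema download, so after roughly 1m50s the addon phase finishes with enabled=[] and both default-storageclass and storage-provisioner are reported as errors. The root cause is an apiserver that never came up on localhost:8443 (--validate=false would not have rescued the applies; the server is refusing all connections), so gating the applies on an apiserver health probe would surface the real failure sooner. A minimal sketch of such a probe, assuming /readyz answers anonymously (whether it does depends on the cluster's authorization settings) and skipping TLS verification only because this is a local liveness check, never appropriate for a real client:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it
// returns 200 or the deadline passes.
func waitForAPIServer(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Local probe only; a real client should trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not ready within %v", base, timeout)
}

func main() {
	// In this log the probe would simply time out, matching the outcome above.
	if err := waitForAPIServer("https://localhost:8443", 90*time.Second); err != nil {
		fmt.Println(err)
	}
}
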
	W1014 20:22:24.416034  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 poll against https://192.168.49.2:8443 kept returning "connect: connection refused" roughly every 2-2.5s from 20:22:26 through 20:24:59 ...]
	W1014 20:25:01.416475  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:03.916023  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:05.916303  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:08.416300  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:10.416919  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:12.916244  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:14.916535  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:16.916873  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:19.416132  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:21.416381  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:23.915957  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:25.916320  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:28.416305  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:30.416865  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:32.916038  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:34.916413  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:37.416363  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:39.916738  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:42.416168  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:44.916517  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:47.416590  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:49.416883  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:51.916394  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:54.416277  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:56.416473  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:25:58.916319  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:01.415934  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:03.416196  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:05.416655  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:07.916910  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:10.416263  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:12.416368  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:14.916632  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:16.916923  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:19.415911  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:21.416220  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:23.416304  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:25.416816  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:27.916321  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:26:29.916832  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:26:31.915871  489898 node_ready.go:38] duration metric: took 6m0.000553348s for node "ha-579393" to be "Ready" ...
	I1014 20:26:31.918599  489898 out.go:203] 
	W1014 20:26:31.920009  489898 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:26:31.920031  489898 out.go:285] * 
	W1014 20:26:31.921790  489898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:26:31.923205  489898 out.go:203] 
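	A quick way to reproduce the symptom outside the test harness (a hypothetical diagnostic, not part of the recorded run) is to probe the same endpoint the retry loop was hitting; "connection refused" at the TCP layer means nothing is listening on 8443 inside the node:
	
	    # assumes the host can reach the 192.168.49.0/24 network created by the docker driver
	    curl -k --connect-timeout 2 https://192.168.49.2:8443/healthz || echo "apiserver not listening"
	    # or look for the listener from inside the node container (requires iproute2 in the image)
	    docker exec ha-579393 sudo ss -ltnp | grep 8443 || echo "no listener on :8443"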
	
	
	==> CRI-O <==
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206804664Z" level=info msg="createCtr: removing container 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.206850624Z" level=info msg="createCtr: deleting container 9d48a70113549e3f7d42603bc93ab58215496d1a2be0313bfda784ca93c63048 from storage" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:23 ha-579393 crio[522]: time="2025-10-14T20:26:23.209012681Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-579393_kube-system_514451ea1eb9e52e24cc36daace2ea4a_0" id=e8033ccc-5e2e-46a7-b2da-b366e6639b16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.183633141Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=167c86e5-60b3-47f2-b9b8-98db022ed999 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.184723744Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=892d8b85-81c2-4b15-90ff-ef55e352edc6 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.185932607Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-579393/kube-apiserver" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.186213766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.190737476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.191229955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.207102111Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208579169Z" level=info msg="createCtr: deleting container ID 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from idIndex" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208619796Z" level=info msg="createCtr: removing container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208664794Z" level=info msg="createCtr: deleting container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from storage" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.210726815Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.182842612Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=78b11ab8-2c8f-4d70-b7bb-fbb373bcdebf name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.183846581Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=85c6a0a3-0b4a-49b9-862c-b1ca332e0177 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.184979721Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.185248366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.189135788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.189599457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.2075276Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209025368Z" level=info msg="createCtr: deleting container ID f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72 from idIndex" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209068332Z" level=info msg="createCtr: removing container f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209114024Z" level=info msg="createCtr: deleting container f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72 from storage" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.211623874Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
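	The repeating "cannot open sd-bus: No such file or directory" above is the OCI runtime failing to reach systemd over D-Bus, which it needs when CRI-O runs with the systemd cgroup manager. A minimal sketch of how one might confirm that from the host (hypothetical commands, assuming the kicbase node normally runs systemd as PID 1):
	
	    docker exec ha-579393 ps -p 1 -o comm=                    # expected: systemd
	    docker exec ha-579393 ls -l /run/systemd/private          # systemd's private bus socket
	    docker exec ha-579393 grep -r cgroup_manager /etc/crio/   # expected: cgroup_manager = "systemd"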
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
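	An empty table here means no container ever reached a runnable state on this node, consistent with the CreateContainer failures above. A hedged way to double-check from the host, assuming crictl is present in the node image as minikube expects:
	
	    docker exec ha-579393 sudo crictl ps -a   # lists all CRI containers, including created/exited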
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:26:36.140006    2338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:36.140493    2338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:36.142192    2338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:36.142803    2338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:36.144094    2338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:26:36 up  3:09,  0 user,  load average: 0.16, 0.07, 0.25
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:26:26 ha-579393 kubelet[673]: E1014 20:26:26.823387     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:26:26 ha-579393 kubelet[673]: I1014 20:26:26.999561     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:26:27 ha-579393 kubelet[673]: E1014 20:26:26.999973     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.183164     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211092     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:30 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > podSandboxID="ace7e840fe529dab46ef907372a1c92e141c023a259dd49f8023c7b15dcf1a62"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211196     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:30 ha-579393 kubelet[673]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211232     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:26:31 ha-579393 kubelet[673]: E1014 20:26:31.198909     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.182260     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.211988     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:33 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:33 ha-579393 kubelet[673]:  > podSandboxID="0ae3992141a445a5fc4b4a1c62c57009afcf5eb3d3627a888843e967b225ebc0"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.212111     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:33 ha-579393 kubelet[673]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:33 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.212148     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.824245     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:26:34 ha-579393 kubelet[673]: I1014 20:26:34.002375     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:26:34 ha-579393 kubelet[673]: E1014 20:26:34.002751     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:26:34 ha-579393 kubelet[673]: E1014 20:26:34.448000     673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 14 20:26:36 ha-579393 kubelet[673]: E1014 20:26:36.182819     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (305.281088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-579393" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-579393" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-579393\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-579393\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-579393\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --o
utput json"
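The assertion compares the node count embedded in the profile JSON against the expected HA topology. A small sketch that reproduces the check by hand, assuming jq is available on the host:

    out/minikube-linux-amd64 profile list --output json \
      | jq '.valid[] | select(.Name=="ha-579393") | .Config.Nodes | length'
    # prints 1 for this run, where the test expects 4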
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-579393
helpers_test.go:243: (dbg) docker inspect ha-579393:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	        "Created": "2025-10-14T20:03:22.416166993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:20:25.14898301Z",
	            "FinishedAt": "2025-10-14T20:20:23.81110806Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hostname",
	        "HostsPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/hosts",
	        "LogPath": "/var/lib/docker/containers/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2/e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2-json.log",
	        "Name": "/ha-579393",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-579393:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-579393",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e999454f448316db190be6cc1b7d83e2da368d1d2c5c77fd538e33303b991db2",
	                "LowerDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2862f7f555e5163c5fb738e59e646f87e5b6f6dc38621bbd5cf8a3ccf2dc40d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-579393",
	                "Source": "/var/lib/docker/volumes/ha-579393/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-579393",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-579393",
	                "name.minikube.sigs.k8s.io": "ha-579393",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c02e504f33501e83f3b9b4187e2f5d221a1e738b5d0f6faf24795ae2990234ba",
	            "SandboxKey": "/var/run/docker/netns/c02e504f3350",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32913"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32914"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32917"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32915"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32916"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-579393": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:54:39:d3:9a:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d686e445c44d566d468791372c74214f3cfc87adaf2fbedd68822ff7d3ce8955",
	                    "EndpointID": "aaae67063c867134e15c0950594bd5a6f0ea17d0626d59a73f395e78fd0d78e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-579393",
	                        "e999454f4483"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-579393 -n ha-579393: exit status 2 (308.75242ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-579393 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:11 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:12 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ kubectl │ ha-579393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node add --alsologtostderr -v 5                                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node stop m02 --alsologtostderr -v 5                                               │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node start m02 --alsologtostderr -v 5                                              │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:13 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │ 14 Oct 25 20:14 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5                                           │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:14 UTC │                     │
	│ node    │ ha-579393 node list --alsologtostderr -v 5                                                   │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                                             │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                        │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │ 14 Oct 25 20:20 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node add --control-plane --alsologtostderr -v 5                                    │ ha-579393 │ jenkins │ v1.37.0 │ 14 Oct 25 20:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:20:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:20:24.896990  489898 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:20:24.897284  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897296  489898 out.go:374] Setting ErrFile to fd 2...
	I1014 20:20:24.897302  489898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:20:24.897542  489898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:20:24.898101  489898 out.go:368] Setting JSON to false
	I1014 20:20:24.899124  489898 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10971,"bootTime":1760462254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:20:24.899246  489898 start.go:141] virtualization: kvm guest
	I1014 20:20:24.901656  489898 out.go:179] * [ha-579393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:20:24.903433  489898 notify.go:220] Checking for updates...
	I1014 20:20:24.903468  489898 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:20:24.905313  489898 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:20:24.907144  489898 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:24.908684  489898 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:20:24.910248  489898 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:20:24.911693  489898 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:20:24.913427  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:24.913995  489898 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:20:24.938824  489898 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:20:24.939019  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.007228  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:24.996574045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.007350  489898 docker.go:318] overlay module found
	I1014 20:20:25.010286  489898 out.go:179] * Using the docker driver based on existing profile
	I1014 20:20:25.011792  489898 start.go:305] selected driver: docker
	I1014 20:20:25.011819  489898 start.go:925] validating driver "docker" against &{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.011930  489898 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:20:25.012031  489898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:20:25.072380  489898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:20:25.062155128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:20:25.073231  489898 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:20:25.073267  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:25.073308  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:25.073364  489898 start.go:349] cluster config:
	{Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:25.075991  489898 out.go:179] * Starting "ha-579393" primary control-plane node in "ha-579393" cluster
	I1014 20:20:25.077302  489898 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:20:25.079216  489898 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:20:25.080637  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:25.080691  489898 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:20:25.080700  489898 cache.go:58] Caching tarball of preloaded images
	I1014 20:20:25.080769  489898 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:20:25.080800  489898 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:20:25.080809  489898 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:20:25.080900  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.101692  489898 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:20:25.101730  489898 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1014 20:20:25.101767  489898 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:20:25.101806  489898 start.go:360] acquireMachinesLock for ha-579393: {Name:mk1f05a584fb3418ecbc78031a467c0e06c7a892 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:20:25.101915  489898 start.go:364] duration metric: took 60.146µs to acquireMachinesLock for "ha-579393"
	I1014 20:20:25.101941  489898 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:20:25.101949  489898 fix.go:54] fixHost starting: 
	I1014 20:20:25.102193  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.120107  489898 fix.go:112] recreateIfNeeded on ha-579393: state=Stopped err=<nil>
	W1014 20:20:25.120156  489898 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:20:25.122045  489898 out.go:252] * Restarting existing docker container for "ha-579393" ...
	I1014 20:20:25.122122  489898 cli_runner.go:164] Run: docker start ha-579393
	I1014 20:20:25.362928  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:25.381546  489898 kic.go:430] container "ha-579393" state is running.
	I1014 20:20:25.382015  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:25.401922  489898 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/config.json ...
	I1014 20:20:25.402188  489898 machine.go:93] provisionDockerMachine start ...
	I1014 20:20:25.402264  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:25.421465  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:25.421725  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:25.421746  489898 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:20:25.422396  489898 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40524->127.0.0.1:32913: read: connection reset by peer
	I1014 20:20:28.574789  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.574824  489898 ubuntu.go:182] provisioning hostname "ha-579393"
	I1014 20:20:28.574892  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.593311  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.593527  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.593539  489898 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-579393 && echo "ha-579393" | sudo tee /etc/hostname
	I1014 20:20:28.751227  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-579393
	
	I1014 20:20:28.751331  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:28.771390  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:28.771598  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:28.771614  489898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-579393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-579393/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-579393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:20:28.919191  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:20:28.919232  489898 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:20:28.919288  489898 ubuntu.go:190] setting up certificates
	I1014 20:20:28.919304  489898 provision.go:84] configureAuth start
	I1014 20:20:28.919374  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:28.937555  489898 provision.go:143] copyHostCerts
	I1014 20:20:28.937600  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937642  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:20:28.937656  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:20:28.937748  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:20:28.938006  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938042  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:20:28.938054  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:20:28.938107  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:20:28.938179  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938205  489898 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:20:28.938214  489898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:20:28.938254  489898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:20:28.938327  489898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.ha-579393 san=[127.0.0.1 192.168.49.2 ha-579393 localhost minikube]
	I1014 20:20:28.984141  489898 provision.go:177] copyRemoteCerts
	I1014 20:20:28.984206  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:20:28.984251  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.002844  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.106575  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 20:20:29.106640  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:20:29.125278  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 20:20:29.125393  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:20:29.144458  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 20:20:29.144532  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 20:20:29.163285  489898 provision.go:87] duration metric: took 243.963585ms to configureAuth
	I1014 20:20:29.163319  489898 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:20:29.163543  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:29.163679  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.182115  489898 main.go:141] libmachine: Using SSH client type: native
	I1014 20:20:29.182329  489898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I1014 20:20:29.182344  489898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:20:29.446950  489898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:20:29.446980  489898 machine.go:96] duration metric: took 4.044773675s to provisionDockerMachine
	I1014 20:20:29.446995  489898 start.go:293] postStartSetup for "ha-579393" (driver="docker")
	I1014 20:20:29.447007  489898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:20:29.447058  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:20:29.447097  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.465397  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.570738  489898 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:20:29.574668  489898 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:20:29.574700  489898 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:20:29.574712  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:20:29.574793  489898 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:20:29.574907  489898 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:20:29.574923  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /etc/ssl/certs/4173732.pem
	I1014 20:20:29.575031  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:20:29.583269  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:29.600614  489898 start.go:296] duration metric: took 153.60445ms for postStartSetup
	I1014 20:20:29.600725  489898 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:20:29.600803  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.618422  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.719349  489898 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:20:29.724065  489898 fix.go:56] duration metric: took 4.622108754s for fixHost
	I1014 20:20:29.724091  489898 start.go:83] releasing machines lock for "ha-579393", held for 4.622163128s
	I1014 20:20:29.724158  489898 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-579393
	I1014 20:20:29.742263  489898 ssh_runner.go:195] Run: cat /version.json
	I1014 20:20:29.742292  489898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:20:29.742312  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.742360  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:29.760806  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.761892  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:29.861522  489898 ssh_runner.go:195] Run: systemctl --version
	I1014 20:20:29.920675  489898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:20:29.958043  489898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:20:29.963013  489898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:20:29.963081  489898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:20:29.971684  489898 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:20:29.971715  489898 start.go:495] detecting cgroup driver to use...
	I1014 20:20:29.971777  489898 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:20:29.971827  489898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:20:29.986651  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:20:29.999493  489898 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:20:29.999555  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:20:30.014987  489898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:20:30.028206  489898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:20:30.108561  489898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:20:30.189013  489898 docker.go:234] disabling docker service ...
	I1014 20:20:30.189092  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:20:30.205263  489898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:20:30.218011  489898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:20:30.297456  489898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:20:30.378372  489898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:20:30.391541  489898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:20:30.406068  489898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:20:30.406139  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.415378  489898 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:20:30.415458  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.425041  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.434283  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.443270  489898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:20:30.451367  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.460460  489898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.469171  489898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:20:30.478459  489898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:20:30.486229  489898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:20:30.493996  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:30.573307  489898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:20:30.683147  489898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:20:30.683209  489898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:20:30.687341  489898 start.go:563] Will wait 60s for crictl version
	I1014 20:20:30.687394  489898 ssh_runner.go:195] Run: which crictl
	I1014 20:20:30.690908  489898 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:20:30.716598  489898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:20:30.716668  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.746004  489898 ssh_runner.go:195] Run: crio --version
	I1014 20:20:30.777705  489898 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:20:30.778957  489898 cli_runner.go:164] Run: docker network inspect ha-579393 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:20:30.796976  489898 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 20:20:30.801378  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.812065  489898 kubeadm.go:883] updating cluster {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:20:30.812194  489898 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:20:30.812256  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.843803  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.843825  489898 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:20:30.843871  489898 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:20:30.870297  489898 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:20:30.870318  489898 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:20:30.870326  489898 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1014 20:20:30.870413  489898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-579393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:20:30.870472  489898 ssh_runner.go:195] Run: crio config
	I1014 20:20:30.916212  489898 cni.go:84] Creating CNI manager for ""
	I1014 20:20:30.916239  489898 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 20:20:30.916269  489898 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:20:30.916293  489898 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-579393 NodeName:ha-579393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:20:30.916410  489898 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-579393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:20:30.916472  489898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:20:30.925261  489898 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:20:30.925338  489898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:20:30.933417  489898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1014 20:20:30.946346  489898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:20:30.959345  489898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1014 20:20:30.972547  489898 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:20:30.976536  489898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:20:30.987410  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.067104  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.089513  489898 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393 for IP: 192.168.49.2
	I1014 20:20:31.089537  489898 certs.go:195] generating shared ca certs ...
	I1014 20:20:31.089557  489898 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.089728  489898 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:20:31.089804  489898 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:20:31.089820  489898 certs.go:257] generating profile certs ...
	I1014 20:20:31.089945  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key
	I1014 20:20:31.090021  489898 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key.d4ee92c1
	I1014 20:20:31.090072  489898 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key
	I1014 20:20:31.090088  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 20:20:31.090106  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 20:20:31.090118  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 20:20:31.090131  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 20:20:31.090142  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 20:20:31.090156  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 20:20:31.090168  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 20:20:31.090182  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 20:20:31.090241  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:20:31.090277  489898 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:20:31.090288  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:20:31.090313  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:20:31.090343  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:20:31.090372  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:20:31.090421  489898 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:20:31.090453  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.090470  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.090487  489898 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem -> /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.091297  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:20:31.111369  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:20:31.131215  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:20:31.152691  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:20:31.177685  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1014 20:20:31.197344  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:20:31.216500  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:20:31.234564  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:20:31.252166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:20:31.269606  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:20:31.288166  489898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:20:31.305894  489898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:20:31.318425  489898 ssh_runner.go:195] Run: openssl version
	I1014 20:20:31.324791  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:20:31.333410  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337628  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.337704  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:20:31.372321  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:20:31.381116  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:20:31.390138  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394052  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.394109  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:20:31.429938  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:20:31.438655  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:20:31.447298  489898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451279  489898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.451343  489898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:20:31.485062  489898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
	I1014 20:20:31.493976  489898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:20:31.498163  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:20:31.532437  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:20:31.569216  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:20:31.605892  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:20:31.653534  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:20:31.690955  489898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 20:20:31.725979  489898 kubeadm.go:400] StartCluster: {Name:ha-579393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-579393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:20:31.726143  489898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:20:31.726202  489898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:20:31.755641  489898 cri.go:89] found id: ""
	I1014 20:20:31.755728  489898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:20:31.764571  489898 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:20:31.764596  489898 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:20:31.764641  489898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:20:31.772544  489898 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:20:31.772997  489898 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-579393" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.773109  489898 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-413763/kubeconfig needs updating (will repair): [kubeconfig missing "ha-579393" cluster setting kubeconfig missing "ha-579393" context setting]
	I1014 20:20:31.773353  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.773843  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.774269  489898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:20:31.774283  489898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:20:31.774287  489898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:20:31.774291  489898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:20:31.774297  489898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:20:31.774312  489898 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1014 20:20:31.774673  489898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:20:31.783543  489898 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1014 20:20:31.783582  489898 kubeadm.go:601] duration metric: took 18.979903ms to restartPrimaryControlPlane
	I1014 20:20:31.783595  489898 kubeadm.go:402] duration metric: took 57.628352ms to StartCluster
	I1014 20:20:31.783616  489898 settings.go:142] acquiring lock: {Name:mke99e63954bc0385c76f9fa1a80091fa7740a6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.783711  489898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:20:31.784245  489898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/kubeconfig: {Name:mk8f52e50cf00a1f90024c40a36e49753857e33b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:20:31.784483  489898 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:20:31.784537  489898 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:20:31.784634  489898 addons.go:69] Setting storage-provisioner=true in profile "ha-579393"
	I1014 20:20:31.784650  489898 addons.go:69] Setting default-storageclass=true in profile "ha-579393"
	I1014 20:20:31.784678  489898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-579393"
	I1014 20:20:31.784687  489898 config.go:182] Loaded profile config "ha-579393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:20:31.784656  489898 addons.go:238] Setting addon storage-provisioner=true in "ha-579393"
	I1014 20:20:31.784839  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.784988  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.785316  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.789929  489898 out.go:179] * Verifying Kubernetes components...
	I1014 20:20:31.791591  489898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:20:31.805965  489898 kapi.go:59] client config for ha-579393: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/profiles/ha-579393/client.key", CAFile:"/home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:20:31.806386  489898 addons.go:238] Setting addon default-storageclass=true in "ha-579393"
	I1014 20:20:31.806441  489898 host.go:66] Checking if "ha-579393" exists ...
	I1014 20:20:31.806931  489898 cli_runner.go:164] Run: docker container inspect ha-579393 --format={{.State.Status}}
	I1014 20:20:31.807584  489898 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:20:31.809119  489898 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.809148  489898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:20:31.809214  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.832877  489898 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:20:31.832915  489898 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:20:31.832999  489898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-579393
	I1014 20:20:31.836985  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.854396  489898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/ha-579393/id_rsa Username:docker}
	I1014 20:20:31.900722  489898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:20:31.915259  489898 node_ready.go:35] waiting up to 6m0s for node "ha-579393" to be "Ready" ...
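From here node_ready.go polls the node's Ready condition every couple of seconds until the 6m0s deadline, which is why "will retry" warnings recur below. The shape of that wait, sketched with client-go and apimachinery's wait helpers (illustrative; not minikube's implementation):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node's Ready condition until the timeout expires.
// Transient API errors (such as the "connection refused" entries below) are
// swallowed so the poll keeps going, matching the log's will-retry behavior.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver not reachable yet; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}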
	I1014 20:20:31.948248  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:31.965203  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.010301  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.010345  489898 retry.go:31] will retry after 180.735659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.026606  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.026655  489898 retry.go:31] will retry after 185.14299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
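Each failed apply below is rescheduled with a growing, jittered delay (180ms, 185ms, then roughly 400ms, 700ms, and so on up to ~44s), the standard exponential-backoff-with-jitter pattern. A generic sketch of that loop using apimachinery's wait.Backoff instead of minikube's own retry package (the kubectl command line is the one from the log; the helper name and parameters are illustrative):

package addons

import (
	"log"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// applyWithBackoff re-runs the same kubectl apply seen in the log, sleeping
// an exponentially growing, jittered interval between attempts.
func applyWithBackoff(manifest string) error {
	backoff := wait.Backoff{
		Duration: 180 * time.Millisecond, // first delay, as in the log above
		Factor:   2.0,                    // roughly double each round
		Jitter:   0.5,                    // randomize, hence the uneven delays
		Steps:    12,                     // give up after ~12 attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Printf("apply failed, will retry: %v\n%s", err, out)
			return false, nil // not done yet; back off and try again
		}
		return true, nil
	})
}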
	I1014 20:20:32.191908  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:20:32.212727  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.261347  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.261379  489898 retry.go:31] will retry after 400.487372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:32.273847  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.273879  489898 retry.go:31] will retry after 332.539123ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.606897  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:32.660842  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.660884  489898 retry.go:31] will retry after 506.115799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.662966  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:32.717555  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:32.717593  489898 retry.go:31] will retry after 698.279488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.167777  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:33.223185  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.223218  489898 retry.go:31] will retry after 929.627856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.416016  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:33.471972  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:33.472005  489898 retry.go:31] will retry after 760.905339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:33.916053  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:34.153507  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:34.208070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.208106  489898 retry.go:31] will retry after 1.612829525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.233328  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:34.287658  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:34.287702  489898 retry.go:31] will retry after 818.99186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.107035  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:35.161369  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.161406  489898 retry.go:31] will retry after 2.372177473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.821805  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:35.876422  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:35.876462  489898 retry.go:31] will retry after 1.76203735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:36.416224  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:37.533877  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:37.589802  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.589836  489898 retry.go:31] will retry after 2.151742617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.639147  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:37.694173  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:37.694209  489898 retry.go:31] will retry after 2.414165218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:38.916349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:39.741973  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:39.798810  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:39.798851  489898 retry.go:31] will retry after 6.380239181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.109367  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:40.165446  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:40.165488  489898 retry.go:31] will retry after 4.273629229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:40.916572  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:43.416160  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:44.439617  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:44.495805  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:44.495847  489898 retry.go:31] will retry after 5.884728712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:45.916420  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:46.179913  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:46.236772  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:46.236810  489898 retry.go:31] will retry after 6.359293031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:48.416258  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:20:50.381581  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:20:50.416856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:50.439004  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:50.439036  489898 retry.go:31] will retry after 11.771270745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.597189  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:20:52.652445  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:20:52.652476  489898 retry.go:31] will retry after 10.720617277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:20:52.916399  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:54.916966  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:57.416509  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:20:59.416864  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:01.916327  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:02.210789  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:02.266987  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:02.267021  489898 retry.go:31] will retry after 17.660934523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.373440  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:03.428855  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:03.428886  489898 retry.go:31] will retry after 19.842704585s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:04.416008  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:06.416555  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:08.916547  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:11.416206  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:13.416608  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:15.916358  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:18.416349  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:19.929156  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:19.984438  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:19.984472  489898 retry.go:31] will retry after 17.500549438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:20.416573  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:22.916397  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:23.271863  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:23.329260  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:23.329293  489898 retry.go:31] will retry after 15.097428161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:24.916706  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:27.416721  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:29.916916  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:32.416582  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:34.916674  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:37.416493  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:21:37.485708  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:21:37.543070  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:37.543103  489898 retry.go:31] will retry after 40.949070497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.427486  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:21:38.483097  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 20:21:38.483131  489898 retry.go:31] will retry after 43.966081483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:21:39.916663  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:42.416063  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:44.416626  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:46.916599  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:49.416540  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:51.916813  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:54.416366  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:56.916158  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:21:58.916828  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:01.416335  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:03.916080  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:05.916821  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:08.416505  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:10.916302  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:13.416113  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:15.416873  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:17.915856  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:18.493252  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 20:22:18.550465  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:18.550634  489898 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1014 20:22:19.916885  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	W1014 20:22:22.415932  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:22:22.450166  489898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 20:22:22.505420  489898 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1014 20:22:22.505542  489898 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 20:22:22.507884  489898 out.go:179] * Enabled addons: 
	I1014 20:22:22.509381  489898 addons.go:514] duration metric: took 1m50.724843787s for enable addons: enabled=[]
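Both addons fail for the same underlying reason: kubectl's client-side validation must first download the apiserver's OpenAPI document, and nothing is listening on port 8443 either locally or at 192.168.49.2, so every apply dies with "connection refused" before any manifest is sent. A quick probe that reproduces the symptom (addresses taken from the log; this is a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"localhost:8443", "192.168.49.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" here
			continue
		}
		conn.Close()
		fmt.Printf("%s: open\n", addr)
	}
}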
	W1014 20:22:24.416034  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 104 near-identical node_ready retry warnings (20:22:26 through 20:26:27, roughly one every 2.5s) elided; each ended in "dial tcp 192.168.49.2:8443: connect: connection refused" ...]
	W1014 20:26:29.916832  489898 node_ready.go:55] error getting node "ha-579393" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-579393": dial tcp 192.168.49.2:8443: connect: connection refused
	I1014 20:26:31.915871  489898 node_ready.go:38] duration metric: took 6m0.000553348s for node "ha-579393" to be "Ready" ...
	I1014 20:26:31.918599  489898 out.go:203] 
	W1014 20:26:31.920009  489898 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1014 20:26:31.920031  489898 out.go:285] * 
	W1014 20:26:31.921790  489898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
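	# editor's note (hedged): the commands referenced in the box above, spelled out against
	# the profile in this log (log collection first, then a quick state summary):
	#
	#   out/minikube-linux-amd64 -p ha-579393 logs --file=logs.txt
	#   out/minikube-linux-amd64 -p ha-579393 status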
	I1014 20:26:31.923205  489898 out.go:203] 
	
	
	==> CRI-O <==
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208619796Z" level=info msg="createCtr: removing container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.208664794Z" level=info msg="createCtr: deleting container 688827a153cab44b50f97bc346359c3a2feba1bdd0b8a7d0d006066ccf422f53 from storage" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:30 ha-579393 crio[522]: time="2025-10-14T20:26:30.210726815Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-579393_kube-system_06869f9c490a5fffb50940ac23939d18_0" id=d2ebc873-5f97-487a-b032-921c5db7d287 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.182842612Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=78b11ab8-2c8f-4d70-b7bb-fbb373bcdebf name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.183846581Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=85c6a0a3-0b4a-49b9-862c-b1ca332e0177 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.184979721Z" level=info msg="Creating container: kube-system/etcd-ha-579393/etcd" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.185248366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.189135788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.189599457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.2075276Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209025368Z" level=info msg="createCtr: deleting container ID f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72 from idIndex" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209068332Z" level=info msg="createCtr: removing container f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.209114024Z" level=info msg="createCtr: deleting container f31321508e090210c9b8eb47ae6d0873bfe8914d8927824eb395e1ee47950a72 from storage" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:33 ha-579393 crio[522]: time="2025-10-14T20:26:33.211623874Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-579393_kube-system_949fee8892a6b2444a3aa0dec92a7837_0" id=34d4f8c9-2d27-4309-adbc-68d2f3f86188 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.183388845Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=79d4dd74-9bd1-4122-89cd-651837b7f9fc name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.184378416Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=82144c72-24e1-4d4a-bde1-7b95a04903ef name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.18542004Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-579393/kube-scheduler" id=2ac870e7-12aa-4804-99bc-3e24cdda587f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.185676056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.190144102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.190564548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.2068281Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2ac870e7-12aa-4804-99bc-3e24cdda587f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.208396248Z" level=info msg="createCtr: deleting container ID d37f793527371ee5e68f65de465d0724ab039379a5527501445556143a0de5c4 from idIndex" id=2ac870e7-12aa-4804-99bc-3e24cdda587f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.208435602Z" level=info msg="createCtr: removing container d37f793527371ee5e68f65de465d0724ab039379a5527501445556143a0de5c4" id=2ac870e7-12aa-4804-99bc-3e24cdda587f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.208472183Z" level=info msg="createCtr: deleting container d37f793527371ee5e68f65de465d0724ab039379a5527501445556143a0de5c4 from storage" id=2ac870e7-12aa-4804-99bc-3e24cdda587f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:26:36 ha-579393 crio[522]: time="2025-10-14T20:26:36.210441451Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-579393_kube-system_8c15ab9dd5834e64ae44874faddf585d_0" id=2ac870e7-12aa-4804-99bc-3e24cdda587f name=/runtime.v1.RuntimeService/CreateContainer
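	# editor's note (hedged): every CreateContainer attempt above fails with "cannot open
	# sd-bus: No such file or directory", which usually means the OCI runtime was told to use
	# the systemd cgroup manager while no systemd bus is reachable inside the node. Two checks
	# that would confirm this; the config path is the stock CRI-O default, not taken from
	# this log:
	#
	#   minikube ssh -p ha-579393 -- sudo grep -Rn cgroup_manager /etc/crio/
	#   minikube ssh -p ha-579393 "test -S /run/systemd/private && echo bus-present || echo bus-missing"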
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:26:37.795060    2515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:37.795639    2515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:37.797289    2515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:37.797726    2515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:26:37.798952    2515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:26:37 up  3:09,  0 user,  load average: 0.31, 0.10, 0.26
	Linux ha-579393 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:26:30 ha-579393 kubelet[673]:         container kube-apiserver start failed in pod kube-apiserver-ha-579393_kube-system(06869f9c490a5fffb50940ac23939d18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:30 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:30 ha-579393 kubelet[673]: E1014 20:26:30.211232     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-579393" podUID="06869f9c490a5fffb50940ac23939d18"
	Oct 14 20:26:31 ha-579393 kubelet[673]: E1014 20:26:31.198909     673 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-579393\" not found"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.182260     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.211988     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:33 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:33 ha-579393 kubelet[673]:  > podSandboxID="0ae3992141a445a5fc4b4a1c62c57009afcf5eb3d3627a888843e967b225ebc0"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.212111     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:33 ha-579393 kubelet[673]:         container etcd start failed in pod etcd-ha-579393_kube-system(949fee8892a6b2444a3aa0dec92a7837): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:33 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.212148     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-579393" podUID="949fee8892a6b2444a3aa0dec92a7837"
	Oct 14 20:26:33 ha-579393 kubelet[673]: E1014 20:26:33.824245     673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-579393?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:26:34 ha-579393 kubelet[673]: I1014 20:26:34.002375     673 kubelet_node_status.go:75] "Attempting to register node" node="ha-579393"
	Oct 14 20:26:34 ha-579393 kubelet[673]: E1014 20:26:34.002751     673 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-579393"
	Oct 14 20:26:34 ha-579393 kubelet[673]: E1014 20:26:34.448000     673 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 14 20:26:36 ha-579393 kubelet[673]: E1014 20:26:36.182819     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-579393\" not found" node="ha-579393"
	Oct 14 20:26:36 ha-579393 kubelet[673]: E1014 20:26:36.210807     673 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:26:36 ha-579393 kubelet[673]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:36 ha-579393 kubelet[673]:  > podSandboxID="40a848371c8bef14930b9f966b4dc01354548664bdbcb8da11cd52ea7c29b8b8"
	Oct 14 20:26:36 ha-579393 kubelet[673]: E1014 20:26:36.210945     673 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:26:36 ha-579393 kubelet[673]:         container kube-scheduler start failed in pod kube-scheduler-ha-579393_kube-system(8c15ab9dd5834e64ae44874faddf585d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:26:36 ha-579393 kubelet[673]:  > logger="UnhandledError"
	Oct 14 20:26:36 ha-579393 kubelet[673]: E1014 20:26:36.210985     673 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-579393" podUID="8c15ab9dd5834e64ae44874faddf585d"
	Oct 14 20:26:36 ha-579393 kubelet[673]: E1014 20:26:36.798216     673 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-579393.186e75138c8fc15b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-579393,UID:ha-579393,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-579393 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-579393,},FirstTimestamp:2025-10-14 20:20:31.171502427 +0000 UTC m=+0.079567776,LastTimestamp:2025-10-14 20:20:31.171502427 +0000 UTC m=+0.079567776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-579393,}"
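	# editor's note (hedged): the kubelet errors above mirror the CRI-O failures; no
	# control-plane container is ever created, so the apiserver stays down and every
	# node-registration and lease call is refused. The inspection kubeadm suggests later in
	# this report applies here too:
	#
	#   minikube ssh -p ha-579393 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"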
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-579393 -n ha-579393: exit status 2 (306.386849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-579393" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.65s)
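As an aside, the --format flag used above takes a Go template over minikube's status struct; a hedged variant that reports host, kubelet, and apiserver state in one call (Host and Kubelet are standard status fields, though only APIServer appears in this log) would be:

	out/minikube-linux-amd64 status -p ha-579393 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'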

                                                
                                    
x
+
TestJSONOutput/start/Command (497.63s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-239279 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1014 20:29:12.807360  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:34:12.807040  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-239279 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m17.628792282s)
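A note on reading the dump that follows: with --output=json, minikube emits one CloudEvents JSON object per line, so error events can be pulled out of a saved run with a line-oriented filter. A hedged sketch (jq assumed to be installed; the file name is illustrative):

	jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message' start.json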

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"676feae5-c63d-4c17-a745-9c78783c5f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-239279] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6dd03b98-45f1-4834-b9fe-b85cee1112f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"90088af3-3f47-451c-a1f4-b4a443ca2ea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f6725ee9-d50d-4555-a3ec-24553f557f14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig"}}
	{"specversion":"1.0","id":"6672d3a5-f076-4d7b-9ece-b036d5fbf048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube"}}
	{"specversion":"1.0","id":"ee9edd8e-e4c5-4e68-a855-fb253b022663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9bb25839-4e77-414e-a53a-c955e3f17ec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"16cfc8d5-ada7-44e2-ac50-9a4bd9fa49d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"660f285b-13de-410f-ab28-497ecf42e7e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"23dcebd1-0269-4d1d-9426-c66222a5427d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-239279\" primary control-plane node in \"json-output-239279\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"323703b6-62ff-4001-80af-1d0a38211419","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"da446ef8-5bdb-4c38-9824-1276945a22e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fc4400f-346a-4e8e-b567-3ff6150d533d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"e067a95c-d804-40b6-8d91-8ffeab4a68b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"9db8cc41-c690-4141-b669-f23eb8e6c566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4613c88-1791-4ad9-9f48-b31b61510ef4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-239279 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-239279 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.00114803s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.001072902s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001147851s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001384077s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using y
our preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused
, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"d78127e5-34b5-4070-8107-a10ed0fc784c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"65c96569-382d-40f6-a0fc-4fb0726fccd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"13cdc106-bb12-4a80-8223-13ba2c559647","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 502.025865ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001036937s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001248368s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001378292s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v p
ause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:
10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"d79ca530-be03-4396-a78a-e520aee7da0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 502.025865ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001036937s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001248368s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001378292s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/c
rio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1
:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"e298ee38-858d-46e3-90bf-f1286c9a61ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-239279 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (497.63s)

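Every line that minikube start --output=json prints (the block quoted above) is one CloudEvents-style JSON object: progress arrives as io.k8s.sigs.minikube.step events carrying data.currentstep and data.totalsteps, and failures arrive as io.k8s.sigs.minikube.error events. Below is a minimal Go sketch of a consumer for that stream; the cloudEvent struct, its field tags, and the program itself are assumptions inferred from the events shown here, not minikube's own types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the log above (an assumption,
// not minikube's own type). All data values are emitted as strings.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	// Error events above run to several KB per line; raise the token limit.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error: %.120s\n", ev.Data["message"]) // truncate long messages
		}
	}
}

Piped into a run like the one above (out/minikube-linux-amd64 start -p json-output-239279 --output=json ...), such a consumer would print one line per step and surface the two kubeadm failures as truncated error lines.
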
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 676feae5-c63d-4c17-a745-9c78783c5f08
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-239279] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 6dd03b98-45f1-4834-b9fe-b85cee1112f8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21409"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 90088af3-3f47-451c-a1f4-b4a443ca2ea7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f6725ee9-d50d-4555-a3ec-24553f557f14
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 6672d3a5-f076-4d7b-9ece-b036d5fbf048
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ee9edd8e-e4c5-4e68-a855-fb253b022663
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9bb25839-4e77-414e-a53a-c955e3f17ec6
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 16cfc8d5-ada7-44e2-ac50-9a4bd9fa49d6
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 660f285b-13de-410f-ab28-497ecf42e7e5
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 23dcebd1-0269-4d1d-9426-c66222a5427d
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-239279\" primary control-plane node in \"json-output-239279\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 323703b6-62ff-4001-80af-1d0a38211419
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: da446ef8-5bdb-4c38-9824-1276945a22e3
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0fc4400f-346a-4e8e-b567-3ff6150d533d
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e067a95c-d804-40b6-8d91-8ffeab4a68b5
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9db8cc41-c690-4141-b669-f23eb8e6c566
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b4613c88-1791-4ad9-9f48-b31b61510ef4
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-239279 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-239279 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.00114803s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.001072902s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001147851s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001384077s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager c
heck failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d78127e5-34b5-4070-8107-a10ed0fc784c
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 65c96569-382d-40f6-a0fc-4fb0726fccd3
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 13cdc106-bb12-4a80-8223-13ba2c559647
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 502.025865ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001036937s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001248368s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001378292s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d79ca530-be03-4396-a78a-e520aee7da0e
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 502.025865ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001036937s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001248368s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001378292s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: e298ee38-858d-46e3-90bf-f1286c9a61ee
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

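The assertion that failed at json_output_test.go:114 requires each currentstep value to be bound to a single step event; because minikube retried kubeadm init, steps 12 and 13 were emitted twice, and the second step-12 event trips the check. A minimal sketch of such a distinctness check follows, assuming the cloudEvent type from the earlier sketch is in scope; checkDistinctSteps is a hypothetical helper, not the test's actual code.

// checkDistinctSteps fails if any currentstep value is used by more than
// one step event, mirroring the failure message above. (Hypothetical
// helper; assumes the cloudEvent type from the decoding sketch.)
func checkDistinctSteps(events []cloudEvent) error {
	seen := map[string]string{} // currentstep -> first message seen
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step := ev.Data["currentstep"]
		if prev, ok := seen[step]; ok {
			return fmt.Errorf("step %s has already been assigned to another step: %s\nCannot use for: %s",
				step, prev, ev.Data["message"])
		}
		seen[step] = ev.Data["message"]
	}
	return nil
}
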
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
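The failure below is the ordering counterpart of the distinctness check above: currentstep values must rise strictly across the run (gaps such as 0, 1, 3, 5, 8, 11 are fine), and the retried step 12 arrives after step 13 has already been seen. A minimal sketch of such a monotonicity check, again assuming the cloudEvent type from the earlier sketch; checkIncreasingSteps and its strconv use are assumptions, not the code at json_output_test.go:144.

// checkIncreasingSteps fails as soon as a step event's currentstep is not
// strictly greater than the previous one. (Hypothetical helper; needs
// "strconv" imported alongside the decoding sketch.)
func checkIncreasingSteps(events []cloudEvent) error {
	last := -1
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			return fmt.Errorf("non-numeric currentstep %q", ev.Data["currentstep"])
		}
		if cur <= last {
			return fmt.Errorf("current step is not in increasing order: got %d after %d", cur, last)
		}
		last = cur
	}
	return nil
}
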
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 676feae5-c63d-4c17-a745-9c78783c5f08
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-239279] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 6dd03b98-45f1-4834-b9fe-b85cee1112f8
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21409"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 90088af3-3f47-451c-a1f4-b4a443ca2ea7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f6725ee9-d50d-4555-a3ec-24553f557f14
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 6672d3a5-f076-4d7b-9ece-b036d5fbf048
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: ee9edd8e-e4c5-4e68-a855-fb253b022663
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9bb25839-4e77-414e-a53a-c955e3f17ec6
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 16cfc8d5-ada7-44e2-ac50-9a4bd9fa49d6
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 660f285b-13de-410f-ab28-497ecf42e7e5
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 23dcebd1-0269-4d1d-9426-c66222a5427d
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-239279\" primary control-plane node in \"json-output-239279\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 323703b6-62ff-4001-80af-1d0a38211419
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: da446ef8-5bdb-4c38-9824-1276945a22e3
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0fc4400f-346a-4e8e-b567-3ff6150d533d
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e067a95c-d804-40b6-8d91-8ffeab4a68b5
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9db8cc41-c690-4141-b669-f23eb8e6c566
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b4613c88-1791-4ad9-9f48-b31b61510ef4
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-239279 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-239279 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.00114803s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.001072902s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001147851s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001384077s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager c
heck failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d78127e5-34b5-4070-8107-a10ed0fc784c
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 65c96569-382d-40f6-a0fc-4fb0726fccd3
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 13cdc106-bb12-4a80-8223-13ba2c559647
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 502.025865ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001036937s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001248368s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001378292s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d79ca530-be03-4396-a78a-e520aee7da0e
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 502.025865ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.001036937s\n[control-plane-check] kube-apiserver is not healthy after 4m0.001248368s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001378292s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: e298ee38-858d-46e3-90bf-f1286c9a61ee
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
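For context: with --output=json, minikube emits one CloudEvents-style JSON object per line on stdout, and the blocks above are those events as the harness prints them on failure. Below is a minimal sketch, not the test's own code, of the invariant the IncreasingCurrentSteps subtest asserts: events of type io.k8s.sigs.minikube.step carry a currentstep value that should keep increasing. The string-valued data map and the strict-increase check are assumptions read off the dump above.

	// verify_steps.go - a sketch, not part of minikube's test suite.
	// Pipe `minikube start --output=json ...` into it.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	// cloudEvent models only the two fields this check needs.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"` // assumption: data values are strings
	}

	func main() {
		last := -1
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be very long
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON noise on stdout
			}
			if ev.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			cur, err := strconv.Atoi(ev.Data["currentstep"])
			if err != nil {
				continue
			}
			if cur <= last {
				fmt.Fprintf(os.Stderr, "currentstep regressed: %d after %d\n", cur, last)
				os.Exit(1)
			}
			last = cur
		}
	}

Here both subtests report 0.00s because the start itself exited 80 before any step events could be compared.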

x
+
TestMinikubeProfile (503.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-826709 --driver=docker  --container-runtime=crio
E1014 20:39:12.806854  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:44:12.807252  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-826709 --driver=docker  --container-runtime=crio: exit status 80 (8m19.964125268s)

-- stdout --
	* [first-826709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-826709" primary control-plane node in "first-826709" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-826709 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-826709 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.939708ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000436069s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000602955s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000679186s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001338386s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000580443s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000633905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000680683s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001338386s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000580443s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000633905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000680683s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
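The three control-plane-check probes that time out above can be reproduced by hand from inside the node (for example via `minikube ssh -p first-826709`). A minimal sketch of those probes follows, assuming the exact endpoints kubeadm prints for this profile; it is a manual triage aid, not kubeadm or minikube code. A "connection refused" result matches the wait-control-plane failure above, and the crictl commands quoted in the log are the next step for finding which static pod exited.

	// probe_control_plane.go - a sketch for manual triage only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The components serve self-signed certificates, so verification is skipped.
		client := &http.Client{
			Timeout:   10 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// Endpoints copied from the kubeadm output above.
		endpoints := map[string]string{
			"kube-apiserver":          "https://192.168.58.2:8443/livez",
			"kube-controller-manager": "https://127.0.0.1:10257/healthz",
			"kube-scheduler":          "https://127.0.0.1:10259/livez",
		}
		for name, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%-25s %v\n", name, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%-25s HTTP %d\n", name, resp.StatusCode)
		}
	}
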
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-826709 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-14 20:45:36.723721937 +0000 UTC m=+5455.422858703
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-829440
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-829440: exit status 1 (29.929204ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-829440

** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-829440 -n second-829440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-829440 -n second-829440: exit status 85 (59.197534ms)

-- stdout --
	* Profile "second-829440" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-829440"

-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-829440" host is not running, skipping log retrieval (state="* Profile \"second-829440\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-829440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-829440
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-14 20:45:36.969265041 +0000 UTC m=+5455.668401805
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-826709
helpers_test.go:243: (dbg) docker inspect first-826709:

-- stdout --
	[
	    {
	        "Id": "766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c",
	        "Created": "2025-10-14T20:37:22.03907986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-14T20:37:22.075428523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c/hostname",
	        "HostsPath": "/var/lib/docker/containers/766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c/hosts",
	        "LogPath": "/var/lib/docker/containers/766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c/766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c-json.log",
	        "Name": "/first-826709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "first-826709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-826709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "766a043830a434ea1e0df9051f99d6d1f5b88dcfd625d8f9fdc17f52de49550c",
	                "LowerDir": "/var/lib/docker/overlay2/4f4e0ed36c4465cf7cdbd01f8c4ceaecd99e87aab5872028b478f280144608a7-init/diff:/var/lib/docker/overlay2/51cca15559205c73df9571f03495ed8a29f085405673af5fef2c1ba7c695d8af/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f4e0ed36c4465cf7cdbd01f8c4ceaecd99e87aab5872028b478f280144608a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f4e0ed36c4465cf7cdbd01f8c4ceaecd99e87aab5872028b478f280144608a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f4e0ed36c4465cf7cdbd01f8c4ceaecd99e87aab5872028b478f280144608a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-826709",
	                "Source": "/var/lib/docker/volumes/first-826709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-826709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-826709",
	                "name.minikube.sigs.k8s.io": "first-826709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b88028ed04fb4c3b53003d45068de3e5b82437bda0690b6eaaedf1b39c3a0234",
	            "SandboxKey": "/var/run/docker/netns/b88028ed04fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32948"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32949"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32952"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32950"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-826709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:2a:b2:85:5a:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0db83ba5b596cd026727640e8bb5969fde9a4b1cb6a3eb2f206cd823e3f97635",
	                    "EndpointID": "fd9c2184ff89364c0ba4a279a9e65b774fb95cd845ef1e6c9741b79a86d7104d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-826709",
	                        "766a043830a4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
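The post-mortem only needs a few fields from the inspect dump above: the container state, the profile network's IP, and the published host ports. The harness itself shells out to `docker inspect`; below is a minimal sketch, assuming the Docker Go SDK instead, of reading the same fields programmatically. The container name is the profile under test.

	// inspect_profile.go - illustrative only; not what helpers_test.go does.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		// A "No such object" error from this call corresponds to the exit
		// status 1 seen above for the never-created second-829440 container.
		info, err := cli.ContainerInspect(context.Background(), "first-826709")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status)
		for name, net := range info.NetworkSettings.Networks {
			fmt.Printf("network %s: ip %s\n", name, net.IPAddress)
		}
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
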
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-826709 -n first-826709
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-826709 -n first-826709: exit status 6 (314.882977ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:45:37.289373  527625 status.go:458] kubeconfig endpoint: get endpoint: "first-826709" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
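The status.go:458 error above means the failed start never wrote a "first-826709" entry into the kubeconfig, which is why `minikube status` reports a stale context and exits nonzero (here status 6, which the helper tolerates). A minimal sketch, assuming client-go, of that same lookup; the `minikube update-context` hint comes from the stdout above.

	// check_context.go - a sketch; minikube's own check lives in status.go.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG is the path shown earlier in this test's stdout.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		const profile = "first-826709"
		if _, ok := cfg.Contexts[profile]; !ok {
			fmt.Printf("%q does not appear in the kubeconfig; run `minikube update-context`\n", profile)
		}
	}
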
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-826709 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-579393 node delete m03 --alsologtostderr -v 5                                                                        │ ha-579393                │ jenkins  │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ stop    │ ha-579393 stop --alsologtostderr -v 5                                                                                   │ ha-579393                │ jenkins  │ v1.37.0 │ 14 Oct 25 20:20 UTC │ 14 Oct 25 20:20 UTC │
	│ start   │ ha-579393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-579393                │ jenkins  │ v1.37.0 │ 14 Oct 25 20:20 UTC │                     │
	│ node    │ ha-579393 node add --control-plane --alsologtostderr -v 5                                                               │ ha-579393                │ jenkins  │ v1.37.0 │ 14 Oct 25 20:26 UTC │                     │
	│ delete  │ -p ha-579393                                                                                                            │ ha-579393                │ jenkins  │ v1.37.0 │ 14 Oct 25 20:26 UTC │ 14 Oct 25 20:26 UTC │
	│ start   │ -p json-output-239279 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-239279       │ testUser │ v1.37.0 │ 14 Oct 25 20:26 UTC │                     │
	│ pause   │ -p json-output-239279 --output=json --user=testUser                                                                     │ json-output-239279       │ testUser │ v1.37.0 │ 14 Oct 25 20:34 UTC │ 14 Oct 25 20:34 UTC │
	│ unpause │ -p json-output-239279 --output=json --user=testUser                                                                     │ json-output-239279       │ testUser │ v1.37.0 │ 14 Oct 25 20:34 UTC │ 14 Oct 25 20:35 UTC │
	│ stop    │ -p json-output-239279 --output=json --user=testUser                                                                     │ json-output-239279       │ testUser │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:35 UTC │
	│ delete  │ -p json-output-239279                                                                                                   │ json-output-239279       │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:35 UTC │
	│ start   │ -p json-output-error-802525 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-802525 │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │                     │
	│ delete  │ -p json-output-error-802525                                                                                             │ json-output-error-802525 │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:35 UTC │
	│ start   │ -p docker-network-936394 --network=                                                                                     │ docker-network-936394    │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:35 UTC │
	│ delete  │ -p docker-network-936394                                                                                                │ docker-network-936394    │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:35 UTC │
	│ start   │ -p docker-network-748937 --network=bridge                                                                               │ docker-network-748937    │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:35 UTC │
	│ delete  │ -p docker-network-748937                                                                                                │ docker-network-748937    │ jenkins  │ v1.37.0 │ 14 Oct 25 20:35 UTC │ 14 Oct 25 20:36 UTC │
	│ start   │ -p existing-network-316800 --network=existing-network                                                                   │ existing-network-316800  │ jenkins  │ v1.37.0 │ 14 Oct 25 20:36 UTC │ 14 Oct 25 20:36 UTC │
	│ delete  │ -p existing-network-316800                                                                                              │ existing-network-316800  │ jenkins  │ v1.37.0 │ 14 Oct 25 20:36 UTC │ 14 Oct 25 20:36 UTC │
	│ start   │ -p custom-subnet-655424 --subnet=192.168.60.0/24                                                                        │ custom-subnet-655424     │ jenkins  │ v1.37.0 │ 14 Oct 25 20:36 UTC │ 14 Oct 25 20:36 UTC │
	│ delete  │ -p custom-subnet-655424                                                                                                 │ custom-subnet-655424     │ jenkins  │ v1.37.0 │ 14 Oct 25 20:36 UTC │ 14 Oct 25 20:36 UTC │
	│ start   │ -p static-ip-708910 --static-ip=192.168.200.200                                                                         │ static-ip-708910         │ jenkins  │ v1.37.0 │ 14 Oct 25 20:36 UTC │ 14 Oct 25 20:37 UTC │
	│ ip      │ static-ip-708910 ip                                                                                                     │ static-ip-708910         │ jenkins  │ v1.37.0 │ 14 Oct 25 20:37 UTC │ 14 Oct 25 20:37 UTC │
	│ delete  │ -p static-ip-708910                                                                                                     │ static-ip-708910         │ jenkins  │ v1.37.0 │ 14 Oct 25 20:37 UTC │ 14 Oct 25 20:37 UTC │
	│ start   │ -p first-826709 --driver=docker  --container-runtime=crio                                                               │ first-826709             │ jenkins  │ v1.37.0 │ 14 Oct 25 20:37 UTC │                     │
	│ delete  │ -p second-829440                                                                                                        │ second-829440            │ jenkins  │ v1.37.0 │ 14 Oct 25 20:45 UTC │ 14 Oct 25 20:45 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:37:16
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:37:16.801926  522512 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:37:16.802152  522512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:37:16.802155  522512 out.go:374] Setting ErrFile to fd 2...
	I1014 20:37:16.802159  522512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:37:16.802380  522512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 20:37:16.802897  522512 out.go:368] Setting JSON to false
	I1014 20:37:16.803801  522512 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11983,"bootTime":1760462254,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:37:16.803905  522512 start.go:141] virtualization: kvm guest
	I1014 20:37:16.805872  522512 out.go:179] * [first-826709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:37:16.807807  522512 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:37:16.807826  522512 notify.go:220] Checking for updates...
	I1014 20:37:16.813294  522512 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:37:16.814740  522512 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 20:37:16.816108  522512 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 20:37:16.817453  522512 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:37:16.818783  522512 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:37:16.820265  522512 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:37:16.843892  522512 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 20:37:16.844057  522512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:37:16.913812  522512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:37:16.900075224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:37:16.913911  522512 docker.go:318] overlay module found
	I1014 20:37:16.915772  522512 out.go:179] * Using the docker driver based on user configuration
	I1014 20:37:16.916973  522512 start.go:305] selected driver: docker
	I1014 20:37:16.916989  522512 start.go:925] validating driver "docker" against <nil>
	I1014 20:37:16.916999  522512 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:37:16.917092  522512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 20:37:16.981378  522512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-14 20:37:16.968561511 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 20:37:16.981573  522512 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:37:16.982115  522512 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1014 20:37:16.982255  522512 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 20:37:16.983905  522512 out.go:179] * Using Docker driver with root privileges
	I1014 20:37:16.985160  522512 cni.go:84] Creating CNI manager for ""
	I1014 20:37:16.985217  522512 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 20:37:16.985223  522512 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 20:37:16.985298  522512 start.go:349] cluster config:
	{Name:first-826709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-826709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:37:16.986671  522512 out.go:179] * Starting "first-826709" primary control-plane node in "first-826709" cluster
	I1014 20:37:16.987973  522512 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 20:37:16.989331  522512 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1014 20:37:16.990516  522512 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:37:16.990562  522512 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:37:16.990571  522512 cache.go:58] Caching tarball of preloaded images
	I1014 20:37:16.990664  522512 preload.go:233] Found /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:37:16.990660  522512 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 20:37:16.990671  522512 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:37:16.990995  522512 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/config.json ...
	I1014 20:37:16.991018  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/config.json: {Name:mk555c74414af69149c5971b998b682d6313ff3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:17.012177  522512 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1014 20:37:17.012190  522512 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
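The two cache lines above reduce to a local image lookup. A minimal equivalent from the host, assuming the kic base tag from the log (the @sha256 digest suffix is dropped here for brevity):

    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703 \
      >/dev/null 2>&1 && echo 'kic base image present, pull skipped'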
	I1014 20:37:17.012204  522512 cache.go:232] Successfully downloaded all kic artifacts
	I1014 20:37:17.012237  522512 start.go:360] acquireMachinesLock for first-826709: {Name:mkf1e2db4c32c1464b7ae1e2470e69405fa1879c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:37:17.012336  522512 start.go:364] duration metric: took 84.488µs to acquireMachinesLock for "first-826709"
	I1014 20:37:17.012357  522512 start.go:93] Provisioning new machine with config: &{Name:first-826709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-826709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:37:17.012414  522512 start.go:125] createHost starting for "" (driver="docker")
	I1014 20:37:17.014420  522512 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1014 20:37:17.014614  522512 start.go:159] libmachine.API.Create for "first-826709" (driver="docker")
	I1014 20:37:17.014634  522512 client.go:168] LocalClient.Create starting
	I1014 20:37:17.014702  522512 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem
	I1014 20:37:17.014729  522512 main.go:141] libmachine: Decoding PEM data...
	I1014 20:37:17.014739  522512 main.go:141] libmachine: Parsing certificate...
	I1014 20:37:17.014822  522512 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem
	I1014 20:37:17.014840  522512 main.go:141] libmachine: Decoding PEM data...
	I1014 20:37:17.014847  522512 main.go:141] libmachine: Parsing certificate...
	I1014 20:37:17.015208  522512 cli_runner.go:164] Run: docker network inspect first-826709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 20:37:17.034356  522512 cli_runner.go:211] docker network inspect first-826709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 20:37:17.034447  522512 network_create.go:284] running [docker network inspect first-826709] to gather additional debugging logs...
	I1014 20:37:17.034468  522512 cli_runner.go:164] Run: docker network inspect first-826709
	W1014 20:37:17.052300  522512 cli_runner.go:211] docker network inspect first-826709 returned with exit code 1
	I1014 20:37:17.052325  522512 network_create.go:287] error running [docker network inspect first-826709]: docker network inspect first-826709: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-826709 not found
	I1014 20:37:17.052335  522512 network_create.go:289] output of [docker network inspect first-826709]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-826709 not found
	
	** /stderr **
	I1014 20:37:17.052448  522512 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:37:17.069844  522512 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-aca708c3f0f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:37:09:24:ed:7a} reservation:<nil>}
	I1014 20:37:17.070219  522512 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001980db0}
	I1014 20:37:17.070241  522512 network_create.go:124] attempt to create docker network first-826709 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1014 20:37:17.070286  522512 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-826709 first-826709
	I1014 20:37:17.127496  522512 network_create.go:108] docker network first-826709 192.168.58.0/24 created
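minikube walks its private ranges until it finds a free /24 (192.168.49.0/24 was taken by an existing bridge, so 192.168.58.0/24 is used). A sketch for verifying the resulting network from the host, with the name and expected values taken from the log:

    docker network inspect first-826709 --format \
      'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
    # expected: subnet=192.168.58.0/24 gateway=192.168.58.1 mtu=1500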
	I1014 20:37:17.127526  522512 kic.go:121] calculated static IP "192.168.58.2" for the "first-826709" container
	I1014 20:37:17.127586  522512 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 20:37:17.143586  522512 cli_runner.go:164] Run: docker volume create first-826709 --label name.minikube.sigs.k8s.io=first-826709 --label created_by.minikube.sigs.k8s.io=true
	I1014 20:37:17.161938  522512 oci.go:103] Successfully created a docker volume first-826709
	I1014 20:37:17.162022  522512 cli_runner.go:164] Run: docker run --rm --name first-826709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-826709 --entrypoint /usr/bin/test -v first-826709:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1014 20:37:17.540994  522512 oci.go:107] Successfully prepared a docker volume first-826709
	I1014 20:37:17.541038  522512 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:37:17.541064  522512 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 20:37:17.541138  522512 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-826709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 20:37:21.962445  522512 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-826709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.421259051s)
	I1014 20:37:21.962473  522512 kic.go:203] duration metric: took 4.421405523s to extract preloaded images to volume ...
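The extraction step above is a standard pattern for populating a named volume: a throwaway container bind-mounts the host tarball read-only next to the volume and runs tar inside the image. The same pattern in generic form (volume name, archive path, and image are placeholders, not the exact minikube invocation):

    docker volume create mydata
    docker run --rm \
      -v /path/to/archive.tar.lz4:/preloaded.tar:ro \
      -v mydata:/extractDir \
      --entrypoint /usr/bin/tar \
      some/image-with-lz4 -I lz4 -xf /preloaded.tar -C /extractDir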
	W1014 20:37:21.962572  522512 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1014 20:37:21.962595  522512 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1014 20:37:21.962636  522512 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 20:37:22.022839  522512 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-826709 --name first-826709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-826709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-826709 --network first-826709 --ip 192.168.58.2 --volume first-826709:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1014 20:37:22.297416  522512 cli_runner.go:164] Run: docker container inspect first-826709 --format={{.State.Running}}
	I1014 20:37:22.316319  522512 cli_runner.go:164] Run: docker container inspect first-826709 --format={{.State.Status}}
	I1014 20:37:22.335398  522512 cli_runner.go:164] Run: docker exec first-826709 stat /var/lib/dpkg/alternatives/iptables
	I1014 20:37:22.384512  522512 oci.go:144] the created container "first-826709" has a running status.
	I1014 20:37:22.384538  522512 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa...
	I1014 20:37:22.501379  522512 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 20:37:22.535550  522512 cli_runner.go:164] Run: docker container inspect first-826709 --format={{.State.Status}}
	I1014 20:37:22.558115  522512 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 20:37:22.558128  522512 kic_runner.go:114] Args: [docker exec --privileged first-826709 chown docker:docker /home/docker/.ssh/authorized_keys]
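With the public key installed in /home/docker/.ssh/authorized_keys, the node is reachable through the ephemeral host port docker bound to the container's 22/tcp (32948 in this run, as the libmachine lines below show). A manual equivalent using the key path from the log:

    docker port first-826709 22/tcp
    ssh -i /home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa \
      -p 32948 docker@127.0.0.1 hostname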
	I1014 20:37:22.610065  522512 cli_runner.go:164] Run: docker container inspect first-826709 --format={{.State.Status}}
	I1014 20:37:22.632413  522512 machine.go:93] provisionDockerMachine start ...
	I1014 20:37:22.632503  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:22.651545  522512 main.go:141] libmachine: Using SSH client type: native
	I1014 20:37:22.651923  522512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I1014 20:37:22.651945  522512 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:37:22.805312  522512 main.go:141] libmachine: SSH cmd err, output: <nil>: first-826709
	
	I1014 20:37:22.805331  522512 ubuntu.go:182] provisioning hostname "first-826709"
	I1014 20:37:22.805390  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:22.824034  522512 main.go:141] libmachine: Using SSH client type: native
	I1014 20:37:22.824394  522512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I1014 20:37:22.824410  522512 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-826709 && echo "first-826709" | sudo tee /etc/hostname
	I1014 20:37:22.985449  522512 main.go:141] libmachine: SSH cmd err, output: <nil>: first-826709
	
	I1014 20:37:22.985513  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:23.003608  522512 main.go:141] libmachine: Using SSH client type: native
	I1014 20:37:23.003847  522512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I1014 20:37:23.003861  522512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-826709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-826709/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-826709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:37:23.151023  522512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:37:23.151047  522512 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-413763/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-413763/.minikube}
	I1014 20:37:23.151066  522512 ubuntu.go:190] setting up certificates
	I1014 20:37:23.151077  522512 provision.go:84] configureAuth start
	I1014 20:37:23.151130  522512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-826709
	I1014 20:37:23.169101  522512 provision.go:143] copyHostCerts
	I1014 20:37:23.169159  522512 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem, removing ...
	I1014 20:37:23.169167  522512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem
	I1014 20:37:23.169234  522512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/cert.pem (1123 bytes)
	I1014 20:37:23.169318  522512 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem, removing ...
	I1014 20:37:23.169321  522512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem
	I1014 20:37:23.169348  522512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/key.pem (1675 bytes)
	I1014 20:37:23.169397  522512 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem, removing ...
	I1014 20:37:23.169400  522512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem
	I1014 20:37:23.169421  522512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-413763/.minikube/ca.pem (1078 bytes)
	I1014 20:37:23.169467  522512 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem org=jenkins.first-826709 san=[127.0.0.1 192.168.58.2 first-826709 localhost minikube]
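The server certificate generated above must carry every name the API server is dialed by, hence the SAN list (127.0.0.1, 192.168.58.2, first-826709, localhost, minikube). Once the file exists it can be checked with openssl, using the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'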
	I1014 20:37:23.485788  522512 provision.go:177] copyRemoteCerts
	I1014 20:37:23.485838  522512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:37:23.485878  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:23.503948  522512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa Username:docker}
	I1014 20:37:23.608179  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 20:37:23.628162  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:37:23.645647  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:37:23.663950  522512 provision.go:87] duration metric: took 512.857726ms to configureAuth
	I1014 20:37:23.663977  522512 ubuntu.go:206] setting minikube options for container-runtime
	I1014 20:37:23.664150  522512 config.go:182] Loaded profile config "first-826709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:37:23.664249  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:23.681811  522512 main.go:141] libmachine: Using SSH client type: native
	I1014 20:37:23.682017  522512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I1014 20:37:23.682026  522512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:37:23.946923  522512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:37:23.946947  522512 machine.go:96] duration metric: took 1.314513294s to provisionDockerMachine
	I1014 20:37:23.946959  522512 client.go:171] duration metric: took 6.932320182s to LocalClient.Create
	I1014 20:37:23.946976  522512 start.go:167] duration metric: took 6.932363023s to libmachine.API.Create "first-826709"
	I1014 20:37:23.946984  522512 start.go:293] postStartSetup for "first-826709" (driver="docker")
	I1014 20:37:23.946996  522512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:37:23.947071  522512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:37:23.947118  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:23.966577  522512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa Username:docker}
	I1014 20:37:24.072285  522512 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:37:24.075945  522512 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 20:37:24.075962  522512 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1014 20:37:24.075972  522512 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/addons for local assets ...
	I1014 20:37:24.076026  522512 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-413763/.minikube/files for local assets ...
	I1014 20:37:24.076093  522512 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem -> 4173732.pem in /etc/ssl/certs
	I1014 20:37:24.076173  522512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:37:24.083818  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:37:24.104852  522512 start.go:296] duration metric: took 157.852553ms for postStartSetup
	I1014 20:37:24.105158  522512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-826709
	I1014 20:37:24.122915  522512 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/config.json ...
	I1014 20:37:24.123175  522512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 20:37:24.123213  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:24.141529  522512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa Username:docker}
	I1014 20:37:24.243268  522512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 20:37:24.248075  522512 start.go:128] duration metric: took 7.235642355s to createHost
	I1014 20:37:24.248095  522512 start.go:83] releasing machines lock for "first-826709", held for 7.235751638s
	I1014 20:37:24.248168  522512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-826709
	I1014 20:37:24.265718  522512 ssh_runner.go:195] Run: cat /version.json
	I1014 20:37:24.265797  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:24.265803  522512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:37:24.265866  522512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-826709
	I1014 20:37:24.284570  522512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa Username:docker}
	I1014 20:37:24.285634  522512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/first-826709/id_rsa Username:docker}
	I1014 20:37:24.441017  522512 ssh_runner.go:195] Run: systemctl --version
	I1014 20:37:24.448163  522512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:37:24.484625  522512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:37:24.489555  522512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:37:24.489608  522512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:37:24.516093  522512 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:37:24.516110  522512 start.go:495] detecting cgroup driver to use...
	I1014 20:37:24.516140  522512 detect.go:190] detected "systemd" cgroup driver on host os
	I1014 20:37:24.516178  522512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:37:24.532781  522512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:37:24.545431  522512 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:37:24.545491  522512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:37:24.562821  522512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:37:24.580381  522512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:37:24.663173  522512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:37:24.753276  522512 docker.go:234] disabling docker service ...
	I1014 20:37:24.753327  522512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:37:24.773699  522512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:37:24.786605  522512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:37:24.869268  522512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:37:24.951478  522512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
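After the stop/disable/mask sequence above, docker and cri-docker are inert inside the node and cri-o (restarted further below) is left as the sole runtime. A quick check, assuming the container name from the log; systemctl exits non-zero when any listed unit is inactive:

    docker exec first-826709 systemctl is-active docker containerd crio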
	I1014 20:37:24.964734  522512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:37:24.979608  522512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:37:24.979673  522512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:37:24.991388  522512 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1014 20:37:24.991445  522512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:37:25.002353  522512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:37:25.012191  522512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:37:25.021723  522512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:37:25.030897  522512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:37:25.041236  522512 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:37:25.055550  522512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
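Taken together, the sed edits above pin the pause image, switch cri-o's cgroup manager to systemd, put conmon in the pod cgroup, and open unprivileged ports via default_sysctls. The resulting drop-in can be inspected in place (file path from the log):

    docker exec first-826709 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf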
	I1014 20:37:25.064986  522512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:37:25.072647  522512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:37:25.079980  522512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:37:25.165100  522512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:37:25.277140  522512 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:37:25.277195  522512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:37:25.281400  522512 start.go:563] Will wait 60s for crictl version
	I1014 20:37:25.281447  522512 ssh_runner.go:195] Run: which crictl
	I1014 20:37:25.285162  522512 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1014 20:37:25.311017  522512 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1014 20:37:25.311081  522512 ssh_runner.go:195] Run: crio --version
	I1014 20:37:25.340995  522512 ssh_runner.go:195] Run: crio --version
	I1014 20:37:25.372923  522512 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1014 20:37:25.374287  522512 cli_runner.go:164] Run: docker network inspect first-826709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 20:37:25.391348  522512 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1014 20:37:25.396388  522512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:37:25.406877  522512 kubeadm.go:883] updating cluster {Name:first-826709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-826709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:37:25.407007  522512 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:37:25.407056  522512 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:37:25.439106  522512 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:37:25.439118  522512 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:37:25.439164  522512 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:37:25.466924  522512 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:37:25.466938  522512 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:37:25.466945  522512 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1014 20:37:25.467033  522512 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-826709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-826709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:37:25.467104  522512 ssh_runner.go:195] Run: crio config
	I1014 20:37:25.517275  522512 cni.go:84] Creating CNI manager for ""
	I1014 20:37:25.517289  522512 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 20:37:25.517307  522512 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:37:25.517328  522512 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-826709 NodeName:first-826709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:37:25.517435  522512 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-826709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:37:25.517496  522512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:37:25.526388  522512 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:37:25.526453  522512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:37:25.534495  522512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1014 20:37:25.547313  522512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:37:25.563193  522512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
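Before the .new file is promoted to kubeadm.yaml and handed to init (further below), it can be sanity-checked with kubeadm's config validate subcommand (available since kubeadm v1.26, so it applies to the v1.34.1 binaries staged here); a sketch using paths from the log:

    docker exec first-826709 sudo \
      /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new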
	I1014 20:37:25.576617  522512 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1014 20:37:25.580531  522512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:37:25.591226  522512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:37:25.672073  522512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:37:25.696480  522512 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709 for IP: 192.168.58.2
	I1014 20:37:25.696497  522512 certs.go:195] generating shared ca certs ...
	I1014 20:37:25.696516  522512 certs.go:227] acquiring lock for ca certs: {Name:mk332cc83f1e1747534fc81e896bed94f24941d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:25.696697  522512 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key
	I1014 20:37:25.696745  522512 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key
	I1014 20:37:25.696770  522512 certs.go:257] generating profile certs ...
	I1014 20:37:25.696837  522512 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/client.key
	I1014 20:37:25.696848  522512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/client.crt with IP's: []
	I1014 20:37:26.084083  522512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/client.crt ...
	I1014 20:37:26.084109  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/client.crt: {Name:mk32b92ccebe1531d16146b3511ee2c7008b727a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:26.084314  522512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/client.key ...
	I1014 20:37:26.084320  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/client.key: {Name:mk2ba10aeff4988e450f5271413ba198e7beeb81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:26.084402  522512 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.key.3950a7cb
	I1014 20:37:26.084412  522512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.crt.3950a7cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1014 20:37:26.427639  522512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.crt.3950a7cb ...
	I1014 20:37:26.427658  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.crt.3950a7cb: {Name:mk47e26d3dfe1a66f8d5725977bc8ade0dc8620f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:26.427850  522512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.key.3950a7cb ...
	I1014 20:37:26.427859  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.key.3950a7cb: {Name:mkf79af960a107cfbf9b4d2832504cfceb22362b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:26.427938  522512 certs.go:382] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.crt.3950a7cb -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.crt
	I1014 20:37:26.428021  522512 certs.go:386] copying /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.key.3950a7cb -> /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.key
	I1014 20:37:26.428068  522512 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.key
	I1014 20:37:26.428077  522512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.crt with IP's: []
	I1014 20:37:26.947575  522512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.crt ...
	I1014 20:37:26.947594  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.crt: {Name:mkdb22e2b265f6287bbae223be29abe208a2ea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:26.947802  522512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.key ...
	I1014 20:37:26.947813  522512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.key: {Name:mkcf5bffda7ec1c1cf74dbe433d0c1092977cd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:37:26.947998  522512 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem (1338 bytes)
	W1014 20:37:26.948029  522512 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373_empty.pem, impossibly tiny 0 bytes
	I1014 20:37:26.948035  522512 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:37:26.948059  522512 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/ca.pem (1078 bytes)
	I1014 20:37:26.948077  522512 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:37:26.948094  522512 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/certs/key.pem (1675 bytes)
	I1014 20:37:26.948126  522512 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem (1708 bytes)
	I1014 20:37:26.948749  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:37:26.968318  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 20:37:26.986971  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:37:27.005791  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:37:27.024618  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 20:37:27.043262  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 20:37:27.061476  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:37:27.079308  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/first-826709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:37:27.096923  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/ssl/certs/4173732.pem --> /usr/share/ca-certificates/4173732.pem (1708 bytes)
	I1014 20:37:27.117057  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:37:27.136943  522512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-413763/.minikube/certs/417373.pem --> /usr/share/ca-certificates/417373.pem (1338 bytes)
	I1014 20:37:27.156204  522512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:37:27.169900  522512 ssh_runner.go:195] Run: openssl version
	I1014 20:37:27.176682  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4173732.pem && ln -fs /usr/share/ca-certificates/4173732.pem /etc/ssl/certs/4173732.pem"
	I1014 20:37:27.186139  522512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4173732.pem
	I1014 20:37:27.190401  522512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:32 /usr/share/ca-certificates/4173732.pem
	I1014 20:37:27.190454  522512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4173732.pem
	I1014 20:37:27.225380  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4173732.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:37:27.234935  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:37:27.244236  522512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:37:27.248633  522512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:15 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:37:27.248695  522512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:37:27.286635  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:37:27.297061  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/417373.pem && ln -fs /usr/share/ca-certificates/417373.pem /etc/ssl/certs/417373.pem"
	I1014 20:37:27.306365  522512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/417373.pem
	I1014 20:37:27.310552  522512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:32 /usr/share/ca-certificates/417373.pem
	I1014 20:37:27.310599  522512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/417373.pem
	I1014 20:37:27.346077  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/417373.pem /etc/ssl/certs/51391683.0"
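The link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL hashed-directory convention: each symlink in /etc/ssl/certs is named after the certificate's subject hash plus a .0 suffix, which is exactly what the openssl x509 -hash calls compute. How the names derive, run inside the node (e.g. via docker exec first-826709 bash):

    for pem in 4173732.pem minikubeCA.pem 417373.pem; do
      h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem)
      echo "$pem -> /etc/ssl/certs/$h.0"
    done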
	I1014 20:37:27.356510  522512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:37:27.360931  522512 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:37:27.360980  522512 kubeadm.go:400] StartCluster: {Name:first-826709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-826709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:37:27.361048  522512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:37:27.361094  522512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:37:27.391290  522512 cri.go:89] found id: ""
	I1014 20:37:27.391357  522512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:37:27.400687  522512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:37:27.409248  522512 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:37:27.409300  522512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:37:27.417788  522512 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:37:27.417797  522512 kubeadm.go:157] found existing configuration files:
	
	I1014 20:37:27.417843  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:37:27.426001  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:37:27.426047  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:37:27.433611  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:37:27.441388  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:37:27.441436  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:37:27.449547  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:37:27.457787  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:37:27.457836  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:37:27.465605  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:37:27.473733  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:37:27.473809  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:37:27.481546  522512 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:37:27.519596  522512 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:37:27.519649  522512 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:37:27.540114  522512 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:37:27.540203  522512 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:37:27.540243  522512 kubeadm.go:318] OS: Linux
	I1014 20:37:27.540302  522512 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:37:27.540363  522512 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:37:27.540423  522512 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:37:27.540474  522512 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:37:27.540522  522512 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:37:27.540569  522512 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:37:27.540629  522512 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:37:27.540681  522512 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:37:27.601583  522512 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:37:27.601860  522512 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:37:27.601989  522512 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:37:27.610462  522512 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:37:27.613615  522512 out.go:252]   - Generating certificates and keys ...
	I1014 20:37:27.613777  522512 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:37:27.613885  522512 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:37:28.114335  522512 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:37:28.631263  522512 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:37:29.129150  522512 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:37:29.251600  522512 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:37:29.703419  522512 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:37:29.703556  522512 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-826709 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1014 20:37:30.011254  522512 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:37:30.011416  522512 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-826709 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1014 20:37:30.100342  522512 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:37:30.545504  522512 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:37:30.617206  522512 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:37:30.617273  522512 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:37:30.697348  522512 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:37:31.267385  522512 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:37:31.503167  522512 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:37:31.629407  522512 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:37:31.792356  522512 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:37:31.792834  522512 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:37:31.797559  522512 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:37:31.800106  522512 out.go:252]   - Booting up control plane ...
	I1014 20:37:31.800242  522512 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:37:31.800322  522512 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:37:31.800908  522512 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:37:31.815479  522512 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:37:31.815595  522512 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:37:31.822543  522512 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:37:31.822671  522512 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:37:31.822823  522512 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:37:31.922122  522512 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:37:31.922257  522512 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:37:32.423930  522512 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.939708ms
	I1014 20:37:32.426878  522512 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:37:32.426970  522512 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1014 20:37:32.427042  522512 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:37:32.427104  522512 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:41:32.427650  522512 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000436069s
	I1014 20:41:32.427955  522512 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000602955s
	I1014 20:41:32.428209  522512 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000679186s
	I1014 20:41:32.428225  522512 kubeadm.go:318] 
	I1014 20:41:32.428398  522512 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:41:32.428484  522512 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:41:32.428577  522512 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:41:32.428693  522512 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:41:32.428802  522512 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:41:32.428923  522512 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:41:32.428927  522512 kubeadm.go:318] 
	I1014 20:41:32.432369  522512 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:41:32.432522  522512 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:41:32.433390  522512 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1014 20:41:32.433498  522512 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1014 20:41:32.433741  522512 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-826709 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-826709 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.939708ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000436069s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000602955s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000679186s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 20:41:32.433851  522512 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 20:41:32.885023  522512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:41:32.898911  522512 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 20:41:32.898967  522512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:41:32.908797  522512 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:41:32.908810  522512 kubeadm.go:157] found existing configuration files:
	
	I1014 20:41:32.908859  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:41:32.917353  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:41:32.917411  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:41:32.925646  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:41:32.933679  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:41:32.933728  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:41:32.941841  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:41:32.950473  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:41:32.950522  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:41:32.958461  522512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:41:32.966610  522512 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:41:32.966664  522512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:41:32.974968  522512 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 20:41:33.013151  522512 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:41:33.013225  522512 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:41:33.032952  522512 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1014 20:41:33.033024  522512 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1014 20:41:33.033063  522512 kubeadm.go:318] OS: Linux
	I1014 20:41:33.033115  522512 kubeadm.go:318] CGROUPS_CPU: enabled
	I1014 20:41:33.033169  522512 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1014 20:41:33.033223  522512 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1014 20:41:33.033276  522512 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1014 20:41:33.033332  522512 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1014 20:41:33.033386  522512 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1014 20:41:33.033446  522512 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1014 20:41:33.033498  522512 kubeadm.go:318] CGROUPS_IO: enabled
	I1014 20:41:33.094400  522512 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:41:33.094505  522512 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:41:33.094610  522512 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:41:33.101599  522512 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:41:33.105299  522512 out.go:252]   - Generating certificates and keys ...
	I1014 20:41:33.105426  522512 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:41:33.105517  522512 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:41:33.105646  522512 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 20:41:33.105727  522512 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1014 20:41:33.105838  522512 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 20:41:33.105901  522512 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1014 20:41:33.105990  522512 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1014 20:41:33.106060  522512 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1014 20:41:33.106130  522512 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 20:41:33.106217  522512 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 20:41:33.106251  522512 kubeadm.go:318] [certs] Using the existing "sa" key
	I1014 20:41:33.106302  522512 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:41:33.681281  522512 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:41:34.433994  522512 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:41:34.484319  522512 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:41:34.912996  522512 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:41:35.121300  522512 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:41:35.121914  522512 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:41:35.124061  522512 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:41:35.127141  522512 out.go:252]   - Booting up control plane ...
	I1014 20:41:35.127263  522512 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:41:35.127350  522512 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:41:35.127440  522512 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:41:35.141826  522512 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:41:35.141944  522512 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:41:35.148978  522512 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:41:35.149200  522512 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:41:35.149243  522512 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:41:35.260024  522512 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:41:35.260142  522512 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:41:36.261244  522512 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001338386s
	I1014 20:41:36.264175  522512 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:41:36.264298  522512 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1014 20:41:36.264423  522512 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:41:36.264491  522512 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:45:36.265101  522512 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000580443s
	I1014 20:45:36.265205  522512 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000633905s
	I1014 20:45:36.265272  522512 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000680683s
	I1014 20:45:36.265274  522512 kubeadm.go:318] 
	I1014 20:45:36.265349  522512 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1014 20:45:36.265446  522512 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 20:45:36.265544  522512 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1014 20:45:36.265732  522512 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 20:45:36.265872  522512 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1014 20:45:36.266015  522512 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1014 20:45:36.266024  522512 kubeadm.go:318] 
	I1014 20:45:36.269059  522512 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1014 20:45:36.269168  522512 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:45:36.269859  522512 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1014 20:45:36.269930  522512 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1014 20:45:36.270024  522512 kubeadm.go:402] duration metric: took 8m8.90904578s to StartCluster
	I1014 20:45:36.270088  522512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 20:45:36.270156  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 20:45:36.298699  522512 cri.go:89] found id: ""
	I1014 20:45:36.298751  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.298774  522512 logs.go:284] No container was found matching "kube-apiserver"
	I1014 20:45:36.298784  522512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 20:45:36.298846  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 20:45:36.325593  522512 cri.go:89] found id: ""
	I1014 20:45:36.325614  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.325624  522512 logs.go:284] No container was found matching "etcd"
	I1014 20:45:36.325631  522512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 20:45:36.325706  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 20:45:36.353465  522512 cri.go:89] found id: ""
	I1014 20:45:36.353487  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.353495  522512 logs.go:284] No container was found matching "coredns"
	I1014 20:45:36.353503  522512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 20:45:36.353572  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 20:45:36.380000  522512 cri.go:89] found id: ""
	I1014 20:45:36.380022  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.380032  522512 logs.go:284] No container was found matching "kube-scheduler"
	I1014 20:45:36.380040  522512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 20:45:36.380106  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 20:45:36.407534  522512 cri.go:89] found id: ""
	I1014 20:45:36.407550  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.407557  522512 logs.go:284] No container was found matching "kube-proxy"
	I1014 20:45:36.407561  522512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 20:45:36.407608  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 20:45:36.434975  522512 cri.go:89] found id: ""
	I1014 20:45:36.434994  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.435003  522512 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 20:45:36.435008  522512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 20:45:36.435071  522512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 20:45:36.462503  522512 cri.go:89] found id: ""
	I1014 20:45:36.462524  522512 logs.go:282] 0 containers: []
	W1014 20:45:36.462534  522512 logs.go:284] No container was found matching "kindnet"
	I1014 20:45:36.462544  522512 logs.go:123] Gathering logs for dmesg ...
	I1014 20:45:36.462558  522512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 20:45:36.483636  522512 logs.go:123] Gathering logs for describe nodes ...
	I1014 20:45:36.483665  522512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 20:45:36.544376  522512 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:45:36.536627    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.537238    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.538825    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.539302    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.540821    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1014 20:45:36.536627    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.537238    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.538825    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.539302    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:36.540821    2415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 20:45:36.544406  522512 logs.go:123] Gathering logs for CRI-O ...
	I1014 20:45:36.544420  522512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 20:45:36.608396  522512 logs.go:123] Gathering logs for container status ...
	I1014 20:45:36.608421  522512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 20:45:36.638306  522512 logs.go:123] Gathering logs for kubelet ...
	I1014 20:45:36.638329  522512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 20:45:36.704649  522512 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001338386s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000580443s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000633905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000680683s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1014 20:45:36.704713  522512 out.go:285] * 
	W1014 20:45:36.704859  522512 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001338386s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000580443s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000633905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000680683s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:45:36.704883  522512 out.go:285] * 
	W1014 20:45:36.706872  522512 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:45:36.711274  522512 out.go:203] 
	W1014 20:45:36.712870  522512 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001338386s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000580443s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000633905s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000680683s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 20:45:36.712911  522512 out.go:285] * 
	I1014 20:45:36.714337  522512 out.go:203] 
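The log above shows kubeadm timing out twice in the wait-control-plane phase: the kubelet reports healthy, but none of the static-pod components (kube-apiserver, kube-controller-manager, kube-scheduler) ever answer their health endpoints. A minimal triage sequence, assuming shell access to the node (e.g. via minikube ssh -p first-826709) and the CRI-O socket path quoted by kubeadm itself:

	# list every Kubernetes container CRI-O attempted to run, including failed ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container by its ID
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# check the runtime itself; container-creation errors surface here before any container log exists
	sudo journalctl -u crio -n 200

The CRI-O journal excerpt below is the decisive one in this run: the control-plane containers were never created at all, so crictl reports an empty table and there are no component logs to read.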
	
	
	==> CRI-O <==
	Oct 14 20:45:28 first-826709 crio[779]: time="2025-10-14T20:45:28.961396421Z" level=info msg="createCtr: deleting container 527a848732ddbc7eecd95314ef811651deeabf28dd6e47674dddb2af3f6905aa from storage" id=1930c59a-2f46-48ec-949b-f7a5be0bb426 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:28 first-826709 crio[779]: time="2025-10-14T20:45:28.965045212Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-826709_kube-system_8002b1a02644aad2ad06bf9d91afb893_0" id=9ef1781c-df5f-433a-b769-c317707ecc18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:28 first-826709 crio[779]: time="2025-10-14T20:45:28.965430211Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-first-826709_kube-system_64cc28c5c25d333af973efcd587fe6fe_0" id=1930c59a-2f46-48ec-949b-f7a5be0bb426 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.928213978Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1411383a-414f-4ece-83c2-f1e1a7775aa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.929226155Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=13b7647b-e684-4cbb-be1b-ad0c7f1428e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.930234121Z" level=info msg="Creating container: kube-system/etcd-first-826709/etcd" id=9429d985-dbee-42e1-a770-615009fd4059 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.930468645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.935090882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.935562667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.95095345Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9429d985-dbee-42e1-a770-615009fd4059 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.95236023Z" level=info msg="createCtr: deleting container ID 9f73a76cb609256abcee2bffa684a213d7c0d956d3ffde3f4926dc90664737a3 from idIndex" id=9429d985-dbee-42e1-a770-615009fd4059 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.952398203Z" level=info msg="createCtr: removing container 9f73a76cb609256abcee2bffa684a213d7c0d956d3ffde3f4926dc90664737a3" id=9429d985-dbee-42e1-a770-615009fd4059 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.952431849Z" level=info msg="createCtr: deleting container 9f73a76cb609256abcee2bffa684a213d7c0d956d3ffde3f4926dc90664737a3 from storage" id=9429d985-dbee-42e1-a770-615009fd4059 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:29 first-826709 crio[779]: time="2025-10-14T20:45:29.95466939Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-826709_kube-system_66fcc3918be76f66aa985ea43161a185_0" id=9429d985-dbee-42e1-a770-615009fd4059 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.928138336Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7b7fc5c2-da04-4b43-8973-2799ac1385cb name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.929250852Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b25c0901-2662-4f36-a836-e1b8ae91b45b name=/runtime.v1.ImageService/ImageStatus
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.930191945Z" level=info msg="Creating container: kube-system/kube-scheduler-first-826709/kube-scheduler" id=47f7114b-428c-4917-980d-42bbad16f4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.930407061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.93381266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.934230697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.951280052Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=47f7114b-428c-4917-980d-42bbad16f4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.952820873Z" level=info msg="createCtr: deleting container ID f49831e613693d92f834da1f4da2e6d503f5bdf69875fe3b7e2a829ebec21aad from idIndex" id=47f7114b-428c-4917-980d-42bbad16f4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.952870635Z" level=info msg="createCtr: removing container f49831e613693d92f834da1f4da2e6d503f5bdf69875fe3b7e2a829ebec21aad" id=47f7114b-428c-4917-980d-42bbad16f4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.952915027Z" level=info msg="createCtr: deleting container f49831e613693d92f834da1f4da2e6d503f5bdf69875fe3b7e2a829ebec21aad from storage" id=47f7114b-428c-4917-980d-42bbad16f4c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 20:45:31 first-826709 crio[779]: time="2025-10-14T20:45:31.95514681Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-826709_kube-system_6165978416d4ce087cc799e6220fb1f5_0" id=47f7114b-428c-4917-980d-42bbad16f4c6 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1014 20:45:37.896405    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:37.897102    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:37.898803    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:37.899284    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1014 20:45:37.900955    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 91 42 a8 46 f5 08 06
	[ +33.952077] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 17 60 38 7c 63 08 06
	[  +0.961658] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba 77 81 98 05 a1 08 06
	[  +0.043334] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 2f 57 55 4c d7 08 06
	[  +4.681081] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 ac 51 a4 b2 5a 08 06
	[Oct14 19:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 d1 de e1 85 25 08 06
	[  +0.899784] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae f2 fe 51 3d 95 08 06
	[  +0.040749] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 c6 36 89 b8 4a 08 06
	[  +4.162603] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 62 d5 5c 0e ef 08 06
	[ +30.801371] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 ae e5 28 c7 39 08 06
	[  +0.817897] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 5e 5e 90 62 1a 08 06
	[  +0.046959] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8f 1b bd b9 9f 08 06
	[Oct14 19:05] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 59 c3 7c db c4 08 06
	
	
	==> kernel <==
	 20:45:37 up  3:28,  0 user,  load average: 0.10, 0.18, 0.26
	Linux first-826709 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 14 20:45:28 first-826709 kubelet[1801]:         container kube-controller-manager start failed in pod kube-controller-manager-first-826709_kube-system(64cc28c5c25d333af973efcd587fe6fe): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:45:28 first-826709 kubelet[1801]:  > logger="UnhandledError"
	Oct 14 20:45:28 first-826709 kubelet[1801]: E1014 20:45:28.966944    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-first-826709" podUID="64cc28c5c25d333af973efcd587fe6fe"
	Oct 14 20:45:29 first-826709 kubelet[1801]: E1014 20:45:29.927675    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-826709\" not found" node="first-826709"
	Oct 14 20:45:29 first-826709 kubelet[1801]: E1014 20:45:29.955062    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:45:29 first-826709 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:45:29 first-826709 kubelet[1801]:  > podSandboxID="6714fa7cbb982b2ded3d93c3ed82035e76975de788ac31122b2357a9b625a5c5"
	Oct 14 20:45:29 first-826709 kubelet[1801]: E1014 20:45:29.955190    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:45:29 first-826709 kubelet[1801]:         container etcd start failed in pod etcd-first-826709_kube-system(66fcc3918be76f66aa985ea43161a185): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:45:29 first-826709 kubelet[1801]:  > logger="UnhandledError"
	Oct 14 20:45:29 first-826709 kubelet[1801]: E1014 20:45:29.955239    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-826709" podUID="66fcc3918be76f66aa985ea43161a185"
	Oct 14 20:45:30 first-826709 kubelet[1801]: E1014 20:45:30.453655    1801 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.58.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.58.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 14 20:45:31 first-826709 kubelet[1801]: E1014 20:45:31.927507    1801 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-826709\" not found" node="first-826709"
	Oct 14 20:45:31 first-826709 kubelet[1801]: E1014 20:45:31.955474    1801 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 14 20:45:31 first-826709 kubelet[1801]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:45:31 first-826709 kubelet[1801]:  > podSandboxID="3d4adc8ba0ae69bc2fe57760d2bbb1881700f34bcc3c50d207b4dd716459cd6f"
	Oct 14 20:45:31 first-826709 kubelet[1801]: E1014 20:45:31.955595    1801 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 14 20:45:31 first-826709 kubelet[1801]:         container kube-scheduler start failed in pod kube-scheduler-first-826709_kube-system(6165978416d4ce087cc799e6220fb1f5): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 14 20:45:31 first-826709 kubelet[1801]:  > logger="UnhandledError"
	Oct 14 20:45:31 first-826709 kubelet[1801]: E1014 20:45:31.955627    1801 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-826709" podUID="6165978416d4ce087cc799e6220fb1f5"
	Oct 14 20:45:32 first-826709 kubelet[1801]: E1014 20:45:32.554847    1801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-826709?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 14 20:45:32 first-826709 kubelet[1801]: I1014 20:45:32.710683    1801 kubelet_node_status.go:75] "Attempting to register node" node="first-826709"
	Oct 14 20:45:32 first-826709 kubelet[1801]: E1014 20:45:32.711117    1801 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-826709"
	Oct 14 20:45:34 first-826709 kubelet[1801]: E1014 20:45:34.142341    1801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-826709.186e763a0546faf1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-826709,UID:first-826709,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-826709 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-826709,},FirstTimestamp:2025-10-14 20:41:35.917161201 +0000 UTC m=+0.656755696,LastTimestamp:2025-10-14 20:41:35.917161201 +0000 UTC m=+0.656755696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-826709,}"
	Oct 14 20:45:35 first-826709 kubelet[1801]: E1014 20:45:35.944135    1801 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-826709\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-826709 -n first-826709
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-826709 -n first-826709: exit status 6 (313.346568ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1014 20:45:38.304286  527954 status.go:458] kubeconfig endpoint: get endpoint: "first-826709" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-826709" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-826709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-826709
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-826709: (1.93369516s)
--- FAIL: TestMinikubeProfile (503.50s)
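
Note: every CreateContainer attempt in the crio and kubelet logs above fails for the same reason, "cannot open sd-bus: No such file or directory" — the OCI runtime is using the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node container, so etcd, kube-apiserver, kube-scheduler and kube-controller-manager never start and the API server stays unreachable. A minimal sketch for confirming that from the host, assuming the docker driver and the node name first-826709 seen in the log; the two socket paths are the usual sd-bus defaults and are an assumption here, not something this report verifies:

	# Hedged check (hypothetical): if neither socket exists inside the node,
	# the systemd cgroup manager cannot open sd-bus and every container
	# create fails as logged above.
	docker exec first-826709 ls -l /run/systemd/private /run/dbus/system_bus_socket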

x
+
TestMultiNode/serial/ValidateNameConflict (7200.071s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860020
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860020-m01 --driver=docker  --container-runtime=crio
E1014 21:10:35.888602  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 21:14:12.805312  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m34s)
		TestMultiNode/serial (28m34s)
		TestMultiNode/serial/ValidateNameConflict (4m52s)

goroutine 2091 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0005028c0, {0x32044f5?, 0xc0005e9a88?}, 0x3c52d60)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc0005028c0)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc0005028c0, 0xc0005e9bc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc000810150, {0x5c636c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc000b37930?, 0x5c8bdc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc0007ec820)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0007ec820)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

goroutine 132 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc0015aa8c0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0015aa8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestOffline(0xc0015aa8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc0015aa8c0, 0x3c52d78)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 152 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0015abc00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0015abc00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0015abc00)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0xb3
testing.tRunner(0xc0015abc00, 0x3c52cb8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 154 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000484fc0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000484fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc000484fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0xb3
testing.tRunner(0xc000484fc0, 0x3c52d08)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 148 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0015aa700)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0015aa700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertOptions(0xc0015aa700)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0xb3
testing.tRunner(0xc0015aa700, 0x3c52c78)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 1793 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0028a2fc0, {0x3219090?, 0x4097904?}, 0xc0005f6080)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc0028a2fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc0028a2fc0, 0xc0009ea7b0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1835
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 151 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0015ab340)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0015ab340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0015ab340)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0xb3
testing.tRunner(0xc0015ab340, 0x3c52cc0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 149 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0015aac40)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0015aac40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertExpiration(0xc0015aac40)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0015aac40, 0x3c52c70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 206 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7d3456457760, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0005aae00?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0005aae00)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0005aae00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0003b3100)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0003b3100)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0003a6800, {0x3f9cdd0, 0xc0003b3100})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0003a6800)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 203
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

goroutine 494 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc1f60, {{0x3fb6f88, 0xc00022a340?}, 0xc0007fa850?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 493
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 2027 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xd, 0xc0005e7a08, 0x4, 0xc000130a20, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0005e7a36?, 0xc0005e7b60?, 0x5930ab?, 0x7ffd5acf31ab?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc000142030?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000580008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00074c000)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc00074c000)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015ba000, 0xc00074c000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3faf4f0, 0xc0002ae460}, 0xc0015ba000, {0xc000122a00, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc0015ba000?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc0015ba000, 0xc0005f6080)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1793
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 495 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0028322a0, 0xc0001101c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 493
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 459 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3faf870, 0xc0001101c0}, 0xc0000bb750, 0xc001eeaf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3faf870, 0xc0001101c0}, 0x70?, 0xc0000bb750, 0xc0000bb798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3faf870?, 0xc0001101c0?}, 0x0?, 0xc0003508d0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc0002e2a80?, 0xc0007fae70?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 495
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 458 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0003b39d0, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0017d1ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc5360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0028322a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x51594e8?, 0x11?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3faf870?, 0xc0001101c0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3faf870, 0xc0001101c0}, 0xc0017d1f50, {0x3f66880, 0xc000574a50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017e8e00?, {0x3f66880?, 0xc000574a50?}, 0x0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002e9da0, 0x3b9aca00, 0x0, 0x1, 0xc0001101c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 495
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 565 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00074c480, 0xc000084a80)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 366
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 627 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0002e2d80, 0xc0009eeaf0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 626
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2081 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00074c000, 0xc0007fa7e0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2027
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 460 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 459
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 2079 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7d3456457530, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000b1a540?, 0xc0015b0a91?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b1a540, {0xc0015b0a91, 0x56f, 0x56f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000bc090, {0xc0015b0a91?, 0x41835f?, 0x2c44020?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0008e0240, {0x3f64c80, 0xc00011c1f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f64e00, 0xc0008e0240}, {0x3f64c80, 0xc00011c1f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0000bc090?, {0x3f64e00, 0xc0008e0240})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0000bc090, {0x3f64e00, 0xc0008e0240})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f64e00, 0xc0008e0240}, {0x3f64d00, 0xc0000bc090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0005f6080?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2027
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 665 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00274e000, 0xc002752070)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 664
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2080 [IO wait]:
internal/poll.runtime_pollWait(0x7d3456457df0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000b1a660?, 0xc0002bb769?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b1a660, {0xc0002bb769, 0x897, 0x897})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000bc0a8, {0xc0002bb769?, 0x41835f?, 0x2c44020?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0008e0270, {0x3f64c80, 0xc0005f8090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f64e00, 0xc0008e0270}, {0x3f64c80, 0xc0005f8090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0000bc0a8?, {0x3f64e00, 0xc0008e0270})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0000bc0a8, {0x3f64e00, 0xc0008e0270})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f64e00, 0xc0008e0270}, {0x3f64d00, 0xc0000bc0a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2027
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 1835 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0015bac40, {0x31f4138?, 0x1a3185c5000?}, 0xc0009ea7b0)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc0015bac40)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x3c5
testing.tRunner(0xc0015bac40, 0x3c52d60)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413
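
Note: the "panic: test timed out after 2h0m0s" above comes from Go's test harness itself, not from minikube — goroutine 2091 is testing.(*M).startAlarm, the timer that go test arms for its -timeout duration and that panics (dumping all goroutines, as seen here) when it fires while TestMultiNode/serial/ValidateNameConflict is still blocked in exec.(*Cmd).Wait (goroutine 2027). A hypothetical invocation with the same limit; the flags actually used by this job are not shown in the report:

	go test ./test/integration -timeout 2h0m0s -run 'TestMultiNode/serial/ValidateNameConflict'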


Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.68
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.85
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
39 TestErrorSpam/start 0.69
40 TestErrorSpam/status 0.9
41 TestErrorSpam/pause 1.33
42 TestErrorSpam/unpause 1.36
43 TestErrorSpam/stop 1.4
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.06
55 TestFunctional/serial/CacheCmd/cache/add_local 1.73
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
60 TestFunctional/serial/CacheCmd/cache/delete 0.11
65 TestFunctional/serial/LogsCmd 0.93
66 TestFunctional/serial/LogsFileCmd 0.93
69 TestFunctional/parallel/ConfigCmd 0.42
71 TestFunctional/parallel/DryRun 0.43
72 TestFunctional/parallel/InternationalLanguage 0.18
78 TestFunctional/parallel/AddonsCmd 0.16
81 TestFunctional/parallel/SSHCmd 0.67
82 TestFunctional/parallel/CpCmd 2.13
84 TestFunctional/parallel/FileSync 0.29
85 TestFunctional/parallel/CertSync 1.79
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
93 TestFunctional/parallel/License 0.47
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
107 TestFunctional/parallel/ProfileCmd/profile_list 0.4
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
110 TestFunctional/parallel/Version/short 0.06
111 TestFunctional/parallel/Version/components 0.54
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
116 TestFunctional/parallel/ImageCommands/ImageBuild 3.47
117 TestFunctional/parallel/ImageCommands/Setup 1.56
118 TestFunctional/parallel/MountCmd/specific-port 2.01
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.48
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.46
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.24
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.22
188 TestKicCustomNetwork/create_custom_network 28.71
189 TestKicCustomNetwork/use_default_bridge_network 25.37
190 TestKicExistingNetwork 24.48
191 TestKicCustomSubnet 25.71
192 TestKicStaticIP 26.44
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 5.92
198 TestMountStart/serial/VerifyMountFirst 0.27
199 TestMountStart/serial/StartWithMountSecond 8.32
200 TestMountStart/serial/VerifyMountSecond 0.27
201 TestMountStart/serial/DeleteFirst 1.71
202 TestMountStart/serial/VerifyMountPostDelete 0.28
203 TestMountStart/serial/Stop 1.21
204 TestMountStart/serial/RestartStopped 7.34
205 TestMountStart/serial/VerifyMountPostStop 0.28
x
+
TestDownloadOnly/v1.28.0/json-events (5.56s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-667039 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-667039 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.559791327s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.56s)

x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1014 19:14:46.906943  417373 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1014 19:14:46.907030  417373 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-667039
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-667039: exit status 85 (67.824901ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-667039 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-667039 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:14:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:14:41.392192  417385 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:14:41.392445  417385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:14:41.392455  417385 out.go:374] Setting ErrFile to fd 2...
	I1014 19:14:41.392459  417385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:14:41.392681  417385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	W1014 19:14:41.392846  417385 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-413763/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-413763/.minikube/config/config.json: no such file or directory
	I1014 19:14:41.393318  417385 out.go:368] Setting JSON to true
	I1014 19:14:41.394393  417385 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7027,"bootTime":1760462254,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:14:41.394512  417385 start.go:141] virtualization: kvm guest
	I1014 19:14:41.396980  417385 out.go:99] [download-only-667039] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:14:41.397140  417385 notify.go:220] Checking for updates...
	W1014 19:14:41.397141  417385 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 19:14:41.398731  417385 out.go:171] MINIKUBE_LOCATION=21409
	I1014 19:14:41.400638  417385 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:14:41.402236  417385 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:14:41.403481  417385 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:14:41.404690  417385 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1014 19:14:41.407348  417385 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 19:14:41.407588  417385 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:14:41.432953  417385 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:14:41.433124  417385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:14:41.498886  417385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-14 19:14:41.487308373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:14:41.499045  417385 docker.go:318] overlay module found
	I1014 19:14:41.500843  417385 out.go:99] Using the docker driver based on user configuration
	I1014 19:14:41.500885  417385 start.go:305] selected driver: docker
	I1014 19:14:41.500895  417385 start.go:925] validating driver "docker" against <nil>
	I1014 19:14:41.501008  417385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:14:41.563366  417385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-14 19:14:41.552630089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:14:41.563544  417385 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:14:41.564004  417385 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1014 19:14:41.564140  417385 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 19:14:41.566220  417385 out.go:171] Using Docker driver with root privileges
	I1014 19:14:41.567454  417385 cni.go:84] Creating CNI manager for ""
	I1014 19:14:41.567505  417385 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1014 19:14:41.567516  417385 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 19:14:41.567608  417385 start.go:349] cluster config:
	{Name:download-only-667039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-667039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:14:41.569009  417385 out.go:99] Starting "download-only-667039" primary control-plane node in "download-only-667039" cluster
	I1014 19:14:41.569050  417385 cache.go:123] Beginning downloading kic base image for docker with crio
	I1014 19:14:41.570422  417385 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1014 19:14:41.570455  417385 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:14:41.570550  417385 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1014 19:14:41.587725  417385 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:14:41.587912  417385 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1014 19:14:41.588003  417385 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1014 19:14:41.590049  417385 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1014 19:14:41.590079  417385 cache.go:58] Caching tarball of preloaded images
	I1014 19:14:41.590210  417385 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:14:41.592250  417385 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1014 19:14:41.592277  417385 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1014 19:14:41.616955  417385 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1014 19:14:41.617077  417385 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1014 19:14:44.918146  417385 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1014 19:14:44.918611  417385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/download-only-667039/config.json ...
	I1014 19:14:44.918836  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/download-only-667039/config.json: {Name:mke53cfcb39b80211289da7706cbd640999a070b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:14:44.919061  417385 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:14:44.919269  417385 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21409-413763/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-667039 host does not exist
	  To start a cluster, run: "minikube start -p download-only-667039"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
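
Note: the "Last Start" log above shows the download-only flow — minikube resolves the preload tarball URL, asks the GCS API for its MD5, and pins the download to that checksum ("checksum=md5:..."). The same verification can be reproduced by hand; the URL and checksum below are copied verbatim from the log, while the curl/md5sum usage is only an illustrative sketch:

	curl -fLo preload.tar.lz4 'https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4'
	echo '72bc7f8573f574c02d8c9a9b3496176b  preload.tar.lz4' | md5sum -c -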

x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-667039
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.68s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-102449 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-102449 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.680811791s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.68s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1014 19:14:52.030399  417373 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1014 19:14:52.030451  417373 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-413763/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
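The preload-exists subtest only confirms that the tarball cached by the preceding download-only run is still on disk. A hedged sketch of that check; the path layout is copied from the log line above, and reading MINIKUBE_HOME is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: MINIKUBE_HOME points at the .minikube directory used in this run.
	home := os.Getenv("MINIKUBE_HOME")
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("found local preload:", tarball)
}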

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-102449
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-102449: exit status 85 (71.284512ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-667039 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-667039 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ delete  │ -p download-only-667039                                                                                                                                                   │ download-only-667039 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ start   │ -o=json --download-only -p download-only-102449 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-102449 │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:14:47
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:14:47.394705  417735 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:14:47.395029  417735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:14:47.395039  417735 out.go:374] Setting ErrFile to fd 2...
	I1014 19:14:47.395043  417735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:14:47.395254  417735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:14:47.395736  417735 out.go:368] Setting JSON to true
	I1014 19:14:47.396667  417735 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7033,"bootTime":1760462254,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:14:47.396785  417735 start.go:141] virtualization: kvm guest
	I1014 19:14:47.398774  417735 out.go:99] [download-only-102449] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:14:47.398909  417735 notify.go:220] Checking for updates...
	I1014 19:14:47.400335  417735 out.go:171] MINIKUBE_LOCATION=21409
	I1014 19:14:47.401744  417735 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:14:47.403098  417735 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:14:47.404425  417735 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:14:47.405952  417735 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1014 19:14:47.408837  417735 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 19:14:47.409168  417735 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:14:47.434273  417735 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:14:47.434386  417735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:14:47.501253  417735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-14 19:14:47.490506021 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:14:47.501354  417735 docker.go:318] overlay module found
	I1014 19:14:47.503132  417735 out.go:99] Using the docker driver based on user configuration
	I1014 19:14:47.503171  417735 start.go:305] selected driver: docker
	I1014 19:14:47.503178  417735 start.go:925] validating driver "docker" against <nil>
	I1014 19:14:47.503261  417735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:14:47.565425  417735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-14 19:14:47.555988278 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:14:47.565650  417735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:14:47.566119  417735 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1014 19:14:47.566248  417735 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 19:14:47.568436  417735 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-102449 host does not exist
	  To start a cluster, run: "minikube start -p download-only-102449"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-102449
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-042272 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-042272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-042272
--- PASS: TestDownloadOnlyKic (0.43s)

TestBinaryMirror (0.85s)

=== RUN   TestBinaryMirror
I1014 19:14:53.183720  417373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-194366 --alsologtostderr --binary-mirror http://127.0.0.1:45401 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-194366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-194366
--- PASS: TestBinaryMirror (0.85s)
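TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:45401, so kubectl is fetched from a local server instead of dl.k8s.io. Any static file server exposing the release layout seen in the log (e.g. /release/v1.34.1/bin/linux/amd64/kubectl) can play that role; a minimal stand-in, with the ./mirror directory as an assumed local copy:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory that mirrors the dl.k8s.io layout; minikube is then
	// started with --binary-mirror http://127.0.0.1:45401 as in the test above.
	log.Fatal(http.ListenAndServe("127.0.0.1:45401", http.FileServer(http.Dir("./mirror"))))
}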

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-995790
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-995790: exit status 85 (65.50005ms)
-- stdout --
	* Profile "addons-995790" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-995790"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-995790
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-995790: exit status 85 (66.757519ms)
-- stdout --
	* Profile "addons-995790" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-995790"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (0.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status: exit status 6 (293.308175ms)
-- stdout --
	nospam-442016
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1014 19:31:56.706496  429853 status.go:458] kubeconfig endpoint: get endpoint: "nospam-442016" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status: exit status 6 (295.219158ms)
-- stdout --
	nospam-442016
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1014 19:31:57.001488  429965 status.go:458] kubeconfig endpoint: get endpoint: "nospam-442016" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status: exit status 6 (307.962204ms)
-- stdout --
	nospam-442016
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1014 19:31:57.309876  430093 status.go:458] kubeconfig endpoint: get endpoint: "nospam-442016" does not appear in /home/jenkins/minikube-integration/21409-413763/kubeconfig
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.90s)
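The error-spam tests run each subcommand three times and look for noisy or unexpected output; the exit status 6 and the kubeconfig warning above recur identically on every run, which is why the subtest still passes. A rough sketch of that repeat-and-collect pattern, illustrative rather than the actual error_spam_test.go logic:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for i := 1; i <= 3; i++ {
		var stderr bytes.Buffer
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "nospam-442016", "--log_dir", "/tmp/nospam-442016", "status")
		cmd.Stderr = &stderr
		_ = cmd.Run() // non-zero exit is recorded above; what matters here is the text
		if s := strings.TrimSpace(stderr.String()); s != "" {
			fmt.Printf("run %d stderr:\n%s\n", i, s)
		}
	}
}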

TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (1.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 stop: (1.206270708s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-442016 --log_dir /tmp/nospam-442016 stop
--- PASS: TestErrorSpam/stop (1.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-413763/.minikube/files/etc/test/nested/copy/417373/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 cache add registry.k8s.io/pause:3.1: (1.020206534s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 cache add registry.k8s.io/pause:3.3: (1.053360195s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.06s)

TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-744288 /tmp/TestFunctionalserialCacheCmdcacheadd_local3817603898/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cache add minikube-local-cache-test:functional-744288
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 cache add minikube-local-cache-test:functional-744288: (1.378467786s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cache delete minikube-local-cache-test:functional-744288
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-744288
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.293734ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
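The cache_reload round-trip above is: remove the image inside the node, confirm `crictl inspecti` fails, run `cache reload`, then confirm the image is back. A compact sketch that drives the same sequence by shelling out the way the test does (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from the report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("%v\n%s", args, out)
	return err
}

func main() {
	const p = "functional-744288"
	_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected the image to be gone before reload")
	}
	_ = run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}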

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/LogsCmd (0.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 logs --file /tmp/TestFunctionalserialLogsFileCmd1977859215/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.93s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 config get cpus: exit status 14 (72.923292ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 config get cpus: exit status 14 (60.047829ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
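ConfigCmd exercises the unset/get/set round-trip, and `config get cpus` exits with status 14 whenever the key is absent, as both non-zero exits above show. A table-driven sketch of the same sequence, illustrative only:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"config", "unset", "cpus"},
		{"config", "get", "cpus"}, // expected: exit 14, key absent
		{"config", "set", "cpus", "2"},
		{"config", "get", "cpus"},
		{"config", "unset", "cpus"},
		{"config", "get", "cpus"}, // expected: exit 14 again
	}
	for _, step := range steps {
		args := append([]string{"-p", "functional-744288"}, step...)
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("%v -> exit %d\n", step, ee.ExitCode())
			continue
		}
		fmt.Printf("%v -> %s", step, out)
	}
}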

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.140905ms)
-- stdout --
	* [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1014 19:59:15.560556  460836 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.560662  460836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.560675  460836 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.560681  460836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.560898  460836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.561468  460836 out.go:368] Setting JSON to false
	I1014 19:59:15.562541  460836 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.562651  460836 start.go:141] virtualization: kvm guest
	I1014 19:59:15.565020  460836 out.go:179] * [functional-744288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.566580  460836 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.566624  460836 notify.go:220] Checking for updates...
	I1014 19:59:15.569260  460836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.570873  460836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.575024  460836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.576588  460836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:15.577970  460836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:59:15.579791  460836 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:15.580296  460836 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:59:15.607401  460836 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:59:15.607521  460836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:59:15.673192  460836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:15.662664159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:59:15.673305  460836 docker.go:318] overlay module found
	I1014 19:59:15.675238  460836 out.go:179] * Using the docker driver based on existing profile
	I1014 19:59:15.676561  460836 start.go:305] selected driver: docker
	I1014 19:59:15.676585  460836 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:15.676748  460836 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:59:15.678818  460836 out.go:203] 
	W1014 19:59:15.680270  460836 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 19:59:15.681816  460836 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744288 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
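The dry-run failure above is pure argument validation: 250MB is below the 1800MB floor, so minikube aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before touching the driver. A sketch of that guard; the threshold and wording are taken from the log, while the constant name is an assumption:

package main

import (
	"fmt"
	"os"
)

func main() {
	const minUsableMB = 1800 // floor reported in the log above
	requestedMB := 250
	if requestedMB < minUsableMB {
		fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			requestedMB, minUsableMB)
		os.Exit(23)
	}
}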

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.999731ms)
-- stdout --
	* [functional-744288] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1014 19:59:15.984416  461207 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:59:15.984586  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.984597  461207 out.go:374] Setting ErrFile to fd 2...
	I1014 19:59:15.984604  461207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:59:15.985010  461207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
	I1014 19:59:15.985511  461207 out.go:368] Setting JSON to false
	I1014 19:59:15.986502  461207 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9702,"bootTime":1760462254,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:59:15.986600  461207 start.go:141] virtualization: kvm guest
	I1014 19:59:15.988840  461207 out.go:179] * [functional-744288] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1014 19:59:15.990551  461207 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:59:15.990567  461207 notify.go:220] Checking for updates...
	I1014 19:59:15.993365  461207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:59:15.994948  461207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig
	I1014 19:59:15.997169  461207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube
	I1014 19:59:15.999150  461207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:59:16.000873  461207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:59:16.003345  461207 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:59:16.004102  461207 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:59:16.029353  461207 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1014 19:59:16.029472  461207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 19:59:16.097661  461207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-14 19:59:16.086601927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1014 19:59:16.097897  461207 docker.go:318] overlay module found
	I1014 19:59:16.099803  461207 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1014 19:59:16.101025  461207 start.go:305] selected driver: docker
	I1014 19:59:16.101045  461207 start.go:925] validating driver "docker" against &{Name:functional-744288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-744288 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:59:16.101172  461207 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:59:16.103591  461207 out.go:203] 
	W1014 19:59:16.105109  461207 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 19:59:16.106244  461207 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.13s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh -n functional-744288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cp functional-744288:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3238097529/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh -n functional-744288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh -n functional-744288 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/417373/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /etc/test/nested/copy/417373/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.79s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/417373.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /etc/ssl/certs/417373.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/417373.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /usr/share/ca-certificates/417373.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4173732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /etc/ssl/certs/4173732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4173732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /usr/share/ca-certificates/4173732.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "sudo systemctl is-active docker": exit status 1 (281.210761ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "sudo systemctl is-active containerd": exit status 1 (279.524231ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
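
Note on the exit codes above: "systemctl is-active" exits 0 only for an active unit and prints the unit state on stdout, so the status-3 exits with "inactive" are exactly what this test expects on a crio cluster, where docker and containerd must both be disabled. A minimal sketch of such a check, assuming a systemd host (hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive reports whether a systemd unit is active, treating any
// non-zero exit from `systemctl is-active` as "not active" while keeping
// the state name printed on stdout (e.g. "inactive").
func runtimeActive(unit string) (bool, string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, state, nil
		}
		return false, "", err
	}
	return state == "active", state, nil
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		active, state, err := runtimeActive(unit)
		fmt.Printf("%s: active=%v state=%q err=%v\n", unit, active, state, err)
	}
}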

TestFunctional/parallel/License (0.47s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "339.871785ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "55.297775ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "339.658815ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.7157ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744288 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744288 image ls --format short --alsologtostderr:
I1014 19:59:27.236537  467213 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:27.236829  467213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.236838  467213 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:27.236843  467213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.237018  467213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:27.237602  467213 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.237695  467213 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.238097  467213 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:27.257415  467213 ssh_runner.go:195] Run: systemctl --version
I1014 19:59:27.257490  467213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:27.276958  467213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
I1014 19:59:27.383634  467213 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744288 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744288 image ls --format table --alsologtostderr:
I1014 19:59:27.702687  467496 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:27.702959  467496 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.702970  467496 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:27.702975  467496 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.703161  467496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:27.703741  467496 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.703870  467496 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.704321  467496 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:27.724939  467496 ssh_runner.go:195] Run: systemctl --version
I1014 19:59:27.725015  467496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:27.745588  467496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
I1014 19:59:27.849576  467496 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744288 image ls --format json --alsologtostderr:
[{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744288 image ls --format json --alsologtostderr:
I1014 19:59:27.469615  467364 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:27.470086  467364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.470104  467364 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:27.470111  467364 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.470410  467364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:27.471119  467364 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.471231  467364 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.471859  467364 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:27.490923  467364 ssh_runner.go:195] Run: systemctl --version
I1014 19:59:27.490980  467364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:27.510093  467364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
I1014 19:59:27.615104  467364 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
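
Note: the "image ls --format json" output above is a plain array of image records. A short decoder sketch, with field names taken from the output itself (the struct is illustrative, not minikube's own type; note that "size" is emitted as a string of bytes, not a number):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the log output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		// %.12s truncates the full image ID to the familiar short form.
		fmt.Printf("%.12s %v %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}

Usage (file name illustrative): out/minikube-linux-amd64 -p functional-744288 image ls --format json | go run decode.go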

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744288 image ls --format yaml --alsologtostderr:
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744288 image ls --format yaml --alsologtostderr:
I1014 19:59:27.771805  467540 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:27.772073  467540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.772083  467540 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:27.772088  467540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:27.772299  467540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:27.772917  467540 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.773026  467540 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:27.773386  467540 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:27.792318  467540 ssh_runner.go:195] Run: systemctl --version
I1014 19:59:27.792377  467540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:27.810308  467540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
I1014 19:59:27.914167  467540 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh pgrep buildkitd: exit status 1 (299.047035ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr: (2.948082441s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b8b91d43a93
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-744288
--> bd6759228d6
Successfully tagged localhost/my-image:functional-744288
bd6759228d6b5ef37d0287ae44b04aa7dff5efca20f0f7982bec1e187be8a661
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744288 image build -t localhost/my-image:functional-744288 testdata/build --alsologtostderr:
I1014 19:59:28.242574  467823 out.go:360] Setting OutFile to fd 1 ...
I1014 19:59:28.242897  467823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:28.242909  467823 out.go:374] Setting ErrFile to fd 2...
I1014 19:59:28.242913  467823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:59:28.243185  467823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-413763/.minikube/bin
I1014 19:59:28.244125  467823 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:28.244986  467823 config.go:182] Loaded profile config "functional-744288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:59:28.245559  467823 cli_runner.go:164] Run: docker container inspect functional-744288 --format={{.State.Status}}
I1014 19:59:28.264777  467823 ssh_runner.go:195] Run: systemctl --version
I1014 19:59:28.264848  467823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744288
I1014 19:59:28.283588  467823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-413763/.minikube/machines/functional-744288/id_rsa Username:docker}
I1014 19:59:28.389617  467823 build_images.go:161] Building image from path: /tmp/build.3918449166.tar
I1014 19:59:28.389686  467823 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 19:59:28.398431  467823 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3918449166.tar
I1014 19:59:28.402821  467823 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3918449166.tar: stat -c "%s %y" /var/lib/minikube/build/build.3918449166.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3918449166.tar': No such file or directory
I1014 19:59:28.402865  467823 ssh_runner.go:362] scp /tmp/build.3918449166.tar --> /var/lib/minikube/build/build.3918449166.tar (3072 bytes)
I1014 19:59:28.421371  467823 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3918449166
I1014 19:59:28.429619  467823 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3918449166 -xf /var/lib/minikube/build/build.3918449166.tar
I1014 19:59:28.437862  467823 crio.go:315] Building image: /var/lib/minikube/build/build.3918449166
I1014 19:59:28.437944  467823 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-744288 /var/lib/minikube/build/build.3918449166 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1014 19:59:31.109358  467823 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-744288 /var/lib/minikube/build/build.3918449166 --cgroup-manager=cgroupfs: (2.671382906s)
I1014 19:59:31.109442  467823 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3918449166
I1014 19:59:31.118555  467823 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3918449166.tar
I1014 19:59:31.127016  467823 build_images.go:217] Built localhost/my-image:functional-744288 from /tmp/build.3918449166.tar
I1014 19:59:31.127051  467823 build_images.go:133] succeeded building to: functional-744288
I1014 19:59:31.127056  467823 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.47s)
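
The STEP lines in the build log imply that the testdata/build context holds a three-instruction Containerfile roughly like the following (reconstructed from the log; the actual file may differ). As the stderr shows, minikube tars the context, copies it to the node, and runs "sudo podman build" against it:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /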

TestFunctional/parallel/ImageCommands/Setup (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.529072424s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-744288
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdspecific-port2832358850/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.373724ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 19:59:20.729959  417373 retry.go:31] will retry after 614.192308ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdspecific-port2832358850/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "sudo umount -f /mount-9p": exit status 1 (293.919283ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-744288 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdspecific-port2832358850/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)
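
The "retry.go:31] will retry after ..." lines above show the harness polling findmnt with a growing, jittered delay until the 9p mount appears. A hypothetical sketch of that pattern (not minikube's actual retry implementation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs f with a roughly doubling, jittered delay until it
// succeeds or the deadline elapses, mirroring the log's retry messages.
func retryUntil(deadline time.Duration, f func() error) error {
	start := time.Now()
	wait := 300 * time.Millisecond
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("attempt %d failed", attempts)
		}
		return nil
	})
}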

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T" /mount1: exit status 1 (347.455166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 19:59:22.771796  417373 retry.go:31] will retry after 316.426388ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T" /mount1
I1014 19:59:23.153069  417373 retry.go:31] will retry after 6.293813415s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-744288 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3629867237/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image rm kicbase/echo-server:functional-744288 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-744288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-744288 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-744288
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-744288
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-744288
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-239279 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-239279 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.24s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-239279 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-239279 --output=json --user=testUser: (1.238490528s)
--- PASS: TestJSONOutput/stop/Command (1.24s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-802525 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-802525 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (70.559446ms)

-- stdout --
	{"specversion":"1.0","id":"83b8b6af-584e-4cae-96d7-22d329e25231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-802525] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66c3129c-0872-400e-af4d-488ff8f7731c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"28ff6861-a287-49c5-b0a1-fcc035c0b6ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31c064ca-012f-4b6c-9cc6-c2f95e8a6642","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-413763/kubeconfig"}}
	{"specversion":"1.0","id":"7f284f22-eacc-4aba-b349-aef60700cf7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-413763/.minikube"}}
	{"specversion":"1.0","id":"4f6a0258-96bd-4c2b-9197-6ac6f99aeb0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e42b344d-419c-475b-b91c-561f44dbd33e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3e092b2-189a-4940-ab10-ae4a5f11f1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-802525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-802525
--- PASS: TestErrorJSONOutput (0.22s)
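Aside: each line in the stdout above is a CloudEvents-style JSON event — minikube's --output=json mode tags every event with a type (io.k8s.sigs.minikube.step, .info, or .error) and a string-valued data map (message, currentstep, totalsteps, exitcode, and so on). The sketch below is a minimal, hypothetical consumer of such a stream, not part of the test suite; pipe "minikube start --output=json" into it:

	// eventtail.go — hypothetical helper, not part of minikube.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the JSON lines above; all data values arrive as strings.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate non-JSON lines in the stream
			}
			// Surface errors the way this test asserts on them: by exit code.
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}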

TestKicCustomNetwork/create_custom_network (28.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-936394 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-936394 --network=: (26.554360843s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-936394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-936394
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-936394: (2.131687518s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.71s)

TestKicCustomNetwork/use_default_bridge_network (25.37s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-748937 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-748937 --network=bridge: (23.380011298s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-748937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-748937
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-748937: (1.97484899s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.37s)

TestKicExistingNetwork (24.48s)

=== RUN   TestKicExistingNetwork
I1014 20:36:00.077058  417373 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1014 20:36:00.094470  417373 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1014 20:36:00.094561  417373 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1014 20:36:00.094589  417373 cli_runner.go:164] Run: docker network inspect existing-network
W1014 20:36:00.112186  417373 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1014 20:36:00.112224  417373 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1014 20:36:00.112240  417373 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1014 20:36:00.112498  417373 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 20:36:00.131186  417373 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028e060}
I1014 20:36:00.131242  417373 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1014 20:36:00.131294  417373 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1014 20:36:00.189288  417373 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-316800 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-316800 --network=existing-network: (22.367212467s)
helpers_test.go:175: Cleaning up "existing-network-316800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-316800
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-316800: (1.96205287s)
I1014 20:36:24.537503  417373 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.48s)
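Aside: the trace above is the test's whole setup path — inspect the network (it doesn't exist yet, so inspect exits 1), pick the free 192.168.49.0/24 subnet, then create a labeled bridge network before starting minikube against it with --network=existing-network. Below is a standalone sketch of that create step, with the docker flags copied from the log line above; running it outside the test harness is an assumption:

	// netcreate.go — hypothetical standalone version of the create step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags as logged by network_create.go above; minikube's labels let it
		// find and clean up networks it created.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24", "--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("create failed:", err)
		}
	}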

TestKicCustomSubnet (25.71s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-655424 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-655424 --subnet=192.168.60.0/24: (23.509761069s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-655424 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-655424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-655424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-655424: (2.177829045s)
--- PASS: TestKicCustomSubnet (25.71s)
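Aside: the subnet assertion leans on docker's Go-template inspect output, indexing the first IPAM config entry. A minimal sketch of the same check as a standalone program — profile name and expected subnet are copied from this run, and doing the comparison in Go rather than in the test harness is an assumption:

	// subnetcheck.go — hypothetical re-implementation of the inspect check.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-655424",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		got := strings.TrimSpace(string(out))
		fmt.Println("subnet:", got, "ok:", got == "192.168.60.0/24")
	}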

TestKicStaticIP (26.44s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-708910 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-708910 --static-ip=192.168.200.200: (24.17189944s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-708910 ip
helpers_test.go:175: Cleaning up "static-ip-708910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-708910
E1014 20:37:15.880916  417373 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-413763/.minikube/profiles/functional-744288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-708910: (2.129511156s)
--- PASS: TestKicStaticIP (26.44s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (5.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-892717 --memory=3072 --mount-string /tmp/TestMountStartserial274616358/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-892717 --memory=3072 --mount-string /tmp/TestMountStartserial274616358/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.92096427s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.92s)
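Aside: the two mount-start profiles share one host directory but listen on distinct 9p ports (46464 and 46465 in the commands above), presumably because each profile runs its own mount server. A sketch of probing such a mount end to end — host directory, profile name, and guest path are copied from this run, while the probe file itself is hypothetical:

	// mountprobe.go — hypothetical probe, not part of the suite.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		hostDir := "/tmp/TestMountStartserial274616358/001" // host side of --mount-string
		if err := os.WriteFile(filepath.Join(hostDir, "probe"), []byte("ok"), 0o644); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		// Guest side of the mount; "probe" should show up in the listing.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-892717",
			"ssh", "--", "ls", "/minikube-host").CombinedOutput()
		fmt.Printf("%s", out)
	}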

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-892717 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-915762 --memory=3072 --mount-string /tmp/TestMountStartserial274616358/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-915762 --memory=3072 --mount-string /tmp/TestMountStartserial274616358/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.320908287s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.32s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-915762 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-892717 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-892717 --alsologtostderr -v=5: (1.705902255s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-915762 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-915762
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-915762: (1.206244552s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-915762
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-915762: (6.334473932s)
--- PASS: TestMountStart/serial/RestartStopped (7.34s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-915762 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)